I'm surprised cashiers are listed as the job most likely to be replaced in the graphics in the article. Unattended self-checkouts have been possible for like 15 years now, and it feels like 5 years ago was peak self-checkout, with some stores drawing back from entirely self-checkout experiences and expansion slowing in others.
My understanding is that the obstacles to stores replacing the remaining cashiers with self-checkouts are not so much "we need a better machine" but (a) the shoplifting deterrence effect of staffing, and (b) customers being slower at packing than cashiers, which can cause queues and other inefficiencies.
HelloUsername · 4h ago
> customers are slower at packing than cashiers
Where I'm from, cashiers don't pack the groceries of customers
derekp7 · 3h ago
And, even if customers are slower, having 20 self-checkout kiosks is still faster than 3 cashiers.
My biggest gripe with self checkout is if I make a mistake, they call it theft. I haven't been trained and certified on the register, so I can't accept the potential for getting a criminal record due to a scanning error. Example: Walmart will successfully "beep" when scanning, but will display "system busy" on the screen, and the scanned items don't show up on the receipt.
echelon · 4h ago
Self-checkout won.
Every time I go to Target there are two lanes manned by staff. Everything else is self-checkout. This is every Target I go to in my state.
Every time I go to Home Depot, there is one person manning the checkout. Everything else is self-checkout.
Even fast food places have self-checkout kiosks that seem to be growing in popularity. In the last few years, McDonald's has been rapidly deploying the tech.
Boba places. Self-checkout has been promoted to first-class and is the only way to order at several of them.
It's everywhere. When I was growing up, Kroger used to have every single checkout aisle staffed. Now they have one or two.
I'm honestly shocked that the perception is that self-checkout hasn't won. It's everywhere and dominates the checkout modalities.
> the shoplifting deterrence effect of staffing
This is all cost modeled. They have lots of cameras and security staff by the door. Even if the tech doesn't work, the mere threat of getting caught is enough to stop most losses. The business accepts that they won't catch everything. They're still saving money by using automated checkout.
abenga · 1h ago
Isn't the perceived worsening of service (maybe more the various boycotts) one of the reasons Target is hemorrhaging foot traffic?
sundaeofshock · 43m ago
Hmmm… I have not seen that. Everything I’ve read says the boycotts have really hit Target’s foot traffic.
aetherson · 4h ago
It's uneven; I do see places that seem to have kept self-checkout minimal or nonexistent, but I think you're right that overall it's winning.
purplezooey · 2h ago
As others mentioned, self-checkout has been widespread for 30 years. They had it at most stores in the 1990s. The answer is, simply, that it's cheaper for companies to hire someone at a non-living wage than it is to install and maintain these systems. Perhaps if we had some policies with teeth: if you're going to hire a person, they must plausibly be able to afford to live and work in the area. Otherwise, your business isn't actually a functioning business and needs reconsideration.
currymj · 4h ago
software is really weird as a test case for understanding AI automation on white collar work.
it seems exceptionally well-suited to AI-based automation because software engineering has already needed to figure out how to efficiently cope with humans who sometimes produce code which may have defects. most obviously automated testing and type systems. also a lot of programming tasks have verifiable solutions so it's also better for training. it seems natural that the most obviously successful AI tools are for coding.
yet the software industry has also been absorbing waves of automation for 70 years. Fortran and COBOL were referred to as "automated programming" and there was a narrative that these new tools would make it easy for non-specialists to program. this time may be different, or it may just be another wave of automation.
i think software has pretty unique properties among white collar jobs, and I would hesitate to draw conclusions about other industries based on AI progress in software engineering.
alephnerd · 4h ago
The issue is people are assuming 100% automation and job replacement by AI/ML overnight - that is NOT happening in the near future.
Realistically, we are going to see 20-30% reductions in headcount in the near-to-medium term. THIS IS STILL CATASTROPHIC.
A number of earlier stage companies I've funded have already been heavily utilizing automation to simplify code generation or scaffolding/project ops work. They use the cost savings to hire experienced SWEs at high base salaries and are able to hit the same development metrics as they would have with a large team of average-paid SWEs. On the BDR side, they are using a massive amount of video/audio automation to scale out cold calling or first impressions, reducing the need to hire teams of BDRs hitting the phones all the time. And finally, they are automating tier 1/2 support ticket responses and communication, reducing the need for teams of support engineers spending time basically responding to customers with the polite equivalent of "read the docs".
Basically, a Series A startup that would have had a staff of 50 employees 10 years ago can essentially output the exact same as a Series A startup with a headcount of 20 employees today, and with a tangible path to FCF positivity.
This is a net reduction in jobs, and a significant one at that, because most people just cannot upskill - it's hard.
sitzkrieg · 5h ago
curious how the AI experts are so wrong on truck drivers. it's apparent to anyone who's been on the road that the US road system will be completely revamped before self-driving cargo trucks are viable
mjr00 · 5h ago
Self-driving vehicles are the perfect example of how something that seems so close can be so far away.
April 29, 2014 - "Milken 2014: Driverless cars due in five years"[0]
Nov 24, 2015 - "Ford is 5 years away from self-driving cars"[1]
Oct 20, 2016 - "A Driverless Tesla Will Travel From L.A. to NYC by 2017"[2]
Now the general consensus is that level 5 autonomous self-driving is decades away, at least.
> Self-driving vehicles are the perfect example of how something that seems so close can be so far away.
Remember that this was before transformers, LLMs, and more recently VLM/VLAMs, though.
I'm not sure to what extent these are already integrated in self-driving hardware, but I would not be surprised if we see a big improvement in self-driving due to related technologies soon, especially with the smaller models becoming far more (efficient and) potent.
romaaeterna · 5h ago
The general consensus may be whatever it is, but a Tesla can drive from L.A. to NYC today with no interventions other than at charging stops.
staunton · 5h ago
However, it might kill you and other people at any moment, so be ready for that intervention...
romaaeterna · 5h ago
And the same is true for a human driver. I imagine that you could differentiate on accidents per vehicle mile.
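One way to sketch that per-mile comparison, with entirely hypothetical numbers (the AV figures below are made up purely for illustration; only the human baseline is roughly the order of the oft-cited US figure):

```python
# Hypothetical comparison of fatal crash rates per 100 million
# vehicle miles traveled (VMT). The AV numbers are placeholders.
HUMAN_FATAL_PER_100M_VMT = 1.3   # rough order of the commonly cited US rate

av_fatal_crashes = 1             # hypothetical AV incident count
av_miles_driven = 200_000_000    # hypothetical AV fleet mileage

av_rate = av_fatal_crashes / av_miles_driven * 100_000_000
print(av_rate)                            # 0.5 per 100M VMT on these made-up numbers
print(av_rate < HUMAN_FATAL_PER_100M_VMT) # True -- but only for this toy input
```

The real argument, of course, is about what the actual fleet-wide numbers are, not the arithmetic.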
yusina · 4h ago
That's a false equivalence. Autonomous vehicles need to be significantly safer than human drivers to be allowed on the streets. If a human driver kills or injures somebody or damages property, they are responsible and will face consequences. An autonomous car won't.
The typical tech person will reply to this with some variant of "that shouldn't matter". Well, it does.
romaaeterna · 4h ago
You are responding to this thread as if we were arguing for these cars "being allowed on the streets". That is not the discussion here. Instead, we are talking about AI capacity.
mjr00 · 4h ago
Saying that a Tesla can drive autonomously from LA to NYC, except it can't in reality because other cars are on the road and it might kill someone, is an odd way to frame it.
It's like saying Windows 95 doesn't have any security flaws, as long as you don't connect it to the internet.
romaaeterna · 3h ago
You: Autonomous driving is years away
Me: My Tesla takes me 20min to work every day without human intervention, and on long trips of hundreds of miles
You: This is illegal because it might kill someone
Call the police on Tesla, I guess?
mjr00 · 2h ago
You mean your "self-driving" Tesla where you're still sitting in the driver's seat with your hands on the wheel (they are on the wheel, unless you want to admit to a crime)?
romaaeterna · 2h ago
Are they?
babyent · 4h ago
Waymo works really well.
aetherson · 4h ago
Albeit in limited areas and, as far as we can tell, with pretty high unit costs.
But I think the smart money is that Waymo will get appreciably better in the 5-10 year timeline. Perhaps massively, crazily better, but at least steadily, noticeably better.
lm28469 · 4h ago
For now Waymo is only used in a few grid cities in the southern US where the weather is extremely nice. It's a prime example of the 80/20 problem and a prime example of what the comment you're replying to mentions.
babyent · 55m ago
I didn't realize technology had to be 100% applicable to everything at 100% capability on day one.
My bad.
echelon · 4h ago
Self-driving is solved and is scaling up.
They're deploying Waymo in Atlanta now and they're already driving around sans-humans.
Atlanta is not a grid. Atlanta has crazy and dangerous road design. Random roads built in random places. Potholes, lots of hills, sharp angles, lack of shoulders, no margin or protection from 45mph oncoming traffic (90mph delta with ZERO margin or division), weird infrastructure, five and six and even seven way intersections, pedestrians, jaywalkers, motorcycle gangs that ride in the wrong lanes. You name it, Atlanta's got it.
Atlanta has lots of weather. Tremendous amounts of subtropical rain in the summer, fog, and other low visibility and dangerous conditions. Intermittent summer rain that pours so hard that it's a literal rain curtain white out. Seasonal tornadoes.
Self-driving is here now. It'll be everywhere by the end of the decade. I can't wait to buy my own Waymo-equipped vehicle.
lm28469 · 4h ago
It was supposed to be everywhere by 2020 and even earlier, lol. I'll believe it when I see it; until then it's wishful thinking or investor talk
echelon · 3h ago
> I'll believe it when I see it
That time when people had blocky cell phones. Then they were in everybody's pocket.
That's self-driving now.
It's here, just uneven. You can ride in one today. Soon, it'll be taking over for trucking, delivering everything you can think of on-demand, and revolutionizing how we think about transit and travel.
Instacart and Uber Eats and Amazon same-day are blocky cell phones. Soon we'll have instant logistics. A ten times faster Amazon. A world where almost everything can be delivered instantly and cheaply.
Trains and subways and metros are rigid and inflexible transit corridors. Soon we'll have last-mile transit pods for everyone that connect everything and everywhere and make suburbs highly desirable and sought after again.
Self-driving will be like the internet or smartphones all over again. And we're right at the precipice.
The financial gradients of all of this ensure that it'll happen and that it'll happen fast and all at once once we get past the inflection point.
But back to your point: you wouldn't call those blocky cell phone users non-cellphone users. The tech just wasn't evenly distributed. It got there in short time, because the tech was world-changing. So too shall it be with self-driving. The economic advantages are inevitable.
kgwgk · 2h ago
> You can ride in one today.
And you could do it in 2020. It seems unlikely that it will be “everywhere” in 2030 unless you give that word a quite restricted meaning.
horhay · 5h ago
The thing about that recent rollout of self-driving trucks is they picked a stretch of road connecting Houston to another shipping point in a very straight line. And they bragged about 2000 "unassisted" miles on that stretch of road, which is ~250 miles in length. So they're basically championing the idea that their trucks, which have made fewer than 10 trips without an on-vehicle driver in that area, are competent enough to be relied upon in a real work capacity.
Whether this amount of success is proof that there won't be any issues with the tech in that area remains to be seen. Hell, they're not even interested yet in talking about how this may pan out outside their Houston trial runs.
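For what it's worth, the arithmetic on those claimed numbers (taking the ~250-mile route length at face value):

```python
# Sanity check: ~2000 "unassisted" miles on a ~250-mile route
# works out to only a handful of one-way runs.
unassisted_miles = 2000
route_length_miles = 250

trips = unassisted_miles / route_length_miles
print(trips)  # 8.0 -- fewer than 10 driverless runs, a very small sample
```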
ArtTimeInvestor · 5h ago
It's important to note that compared to human labor, AI is extremely cheap.
Society as a whole will produce more products and services and have to work less for this.
In other words, products and services will become much cheaper.
In the future, living on social welfare might offer a lifestyle more luxurious than working full-time is now.
So if most people's lives go from "I have to work hard to pay my bills" to "I don't have to work as hard and can afford more", that will not drive people to go fight in the streets.
staunton · 5h ago
If most people don't work (or most people's work contributes only a small fraction of the economy), where do they derive their political capital that gives them any social welfare at all? Why would the powerful AI overlords owned by large conglomerates give them anything or care what they think?
aianus · 5h ago
People who don't work today have more political capital than workers because they have the free time to vote and attend primaries and protests and town hall meetings and things like that.
cardanome · 4h ago
Because the productivity growth in the last decades thanks to automation and many other factors has benefited workers? (Hint: It has not.)
The question of who will benefit from AI is a political one. As power is currently firmly in the hands of the capital class, AI will only make the rich richer and the poor poorer.
Wages are not based on how much value you produce but on how much it costs to reproduce your labor: how much it costs for you to stay alive, be able to work, and feed your family. What kind of living standard that entails is a political matter and depends on how strong your unions are, how willing you are to fight for it, and so on.
Same with welfare. It is something that was fought for, and if the power balance is such that they can get away with giving you less, they will. In fact some rich pricks would gladly turn you into Soylent Green if they could get away with it.
yusina · 4h ago
> As power is currently firmly in the had of the capital class, AI will only make the rich richer and the poor poorer.
I have hope in decentralized technologies. Solar power is a rare example. If things go right, then in a few decades your typical household will just not depend on anybody anymore in terms of energy needs. That's a significant shift.
That is, if big corps don't manage to capture that somehow, though I can't currently see how. But it has happened before. The www and email used to be as decentralized as you could imagine. Nowadays hardly any email gets sent that doesn't come from or go to a Gmail server, or at least one of the three biggest providers. Similarly with the www. Everything is so centralized now; it's a great success of big tech.
dantheman · 4h ago
Hint: it has. The standard of living is far higher now; medical care is far better; and the number of people subsidized by those working has greatly increased (this may not be considered a benefit, but it's where the extra income is going).
yusina · 4h ago
If your time axis is hundreds of years, yes. But the last decades don't fit that theory.
perching_aix · 5h ago
> It's important to note that compared to human labor, AI is extremely cheap.
Frontier models are being provided at a loss right now to my "knowledge", and they're already as crap as you'd want to go, if not even crappier, for worker replacement in my experience.
Buttons840 · 5h ago
Frontier models probably cost hundreds a month, so they are being provided at a loss. Humans cost thousands a month. Both can be true.
sundaeofshock · 5m ago
OpenAI lost $5 billion in 2024. They have talked about their programming agent running $20k/month. There will be a price point where an AI coding assistant makes zero sense for many companies.
TheRoque · 3h ago
It costs hundreds a month for the customer, but way more for the companies. Try Claude Code: it's pay as you go, and it's like $0.50 to $1 per task, and it doesn't even complete tasks well. So if you make it work 24/7, I'm pretty sure it's going to swallow your money for a result weaker than a regular engineer's. Moreover, you need someone to check what the AI is doing AND you need to prompt it properly (unless the CEO is willing to do that himself).
So right now, I have a really hard time seeing LLMs as money well spent if they were bought at their real cost
perching_aix · 4h ago
Sure, what I was going for was more that they'll likely increase in consumer price in the future. It may still be worth it; just an important thing to consider.
Jordan-117 · 4h ago
The fruits of virtually all the productivity gains in the last 40 years have gone straight to the top while the 99% have stagnated amidst rising costs in essentials. Broadly shared prosperity has only come about through strong unions, regulation, and redistributive taxation, which seem politically farther away than they've been in decades. The wealthiest elite don't just want wealth, they want power over others. Why would they tolerate an economic system where anyone can live a decent life independent of work?
lm28469 · 4h ago
This is both overly optimistic and overly simplistic.
It doesn't matter how many products and services your society shits out, what matters is how you organize it. And clearly we're already way past the point where the amount of products and services is a bottleneck to quality of life. Access to healthcare, job security, home ownership, the ability to afford kids, good food, all of these things are going down, not up, despite the fact that as a whole we're much better off than in the last century.
Virtually no one can sustain a family on a single salary, yet it was the norm not so long ago.
Sure enough, we now have access to instant brain-rot doomscrolling services, infinite low-quality entertainment, vast amounts of ultra-processed food, etc., but if anything these things are distractions.
yusina · 4h ago
That view is refreshingly naïve and equally wrong. Prices are determined by the market, not by how hard something was to produce.
And as exhibit A I provide progress in automation in the last decades (enormous) in relation to the median buying power of the population in developed nations.
It just doesn't work the way you describe, but I have sympathies for naïvely believing that it does.
westmeal · 5h ago
I see it another way. I think the products will become cheaper to produce but the prices will remain the same or higher. Just because something is cheaper does not mean businesses are obligated to reduce price. If anything it's the opposite (as long as you can get away with it).
SoftTalker · 4h ago
In a competitive market, prices tend to fall to the zero-profit point. Of course nobody survives on zero profit for long, so that's not exactly the reality, but it is the pressure. If an item becomes cheaper to produce, and Company A and Company B both produce the item, one of them will drop the price to get more sales at a slightly lower profit per item. And back and forth.
TheRoque · 3h ago
That's just the theory though; if you look at the video game industry, prices keep rising and lazy productions are more expensive than ever (e.g. Oblivion Remastered).
Same goes for my movie theater: why did it double its prices in the past 5 years while having fewer employees? Oh wait, there's only one theater in town and I don't have a car, that's probably why.
kgwgk · 5h ago
Living on social welfare now offers a lifestyle as luxurious as working full-time in the past.
whatshisface · 5h ago
Only in a handful of countries.
skepticATX · 4h ago
Access to cheap goods may lead to short term satisfaction, but will never lead to long term fulfillment.
plemer · 4h ago
Cheaper products is a very narrow definition of luxury.
schmichael · 4h ago
[Citation needed]
throwaway894345 · 5h ago
I think you’re very generously assuming that the benefits of AI will be distributed evenly across society and not hoarded by the people who can afford to train hundred-billion-parameter models. That’s not how technology has worked in the past, so I don’t see why it should work differently this time.
badgersnake · 5h ago
Or we’ll produce the same amount of stuff, the billionaires controlling the AI will run off with all the money and the rest of us will starve whilst anyone that argues is hunted down by autonomous drones.
DataDaemon · 5h ago
so deflation and print money like crazy for UBI?
keybored · 5h ago
Either the state of political education is terrible or people have too much money stuffed in their ears to understand concepts that normal people need just to survive (at times).
It’s taken for granted that AI is extremely cheap. Products and services don’t become cheaper just because they are easier to produce. One way to get rich is artificial scarcity.
Now the only way I know that people in most jobs got better pay was to organize. Including striking. You can’t bully people too much if they can shut down sectors of the local community or country.
Cheap AI can make things cheap. Or not. Who owns it? Some dozens of rich people? Do they have to make it cheap? Do governments have to make living on social welfare (who says there will be social welfare?[1]) “more luxurious than working full-time is now” when you look around and... there’s no one to pressure them to do that any more?
[1] Some people with a lot of money stuffed in their ears think that social welfare is something that the poors/the lazy can just fall back on indefinitely. They know nothing about means testing or the fact that social welfare has been fought against by the very rich (ideologues who are savvy, they certainly didn’t stuff money in their ears) since social welfare was introduced.
techpineapple · 5h ago
I don’t quite understand the full extent of the AI will create jobs argument. In prior revolutions, say automation, automation created jobs because like building and maintaining robots is a whole thing. Building and maintaining AI is a whole thing, but if you’re talking about wholesale automation of intelligence, the fundamental question I have is:
What jobs will AI create that AI cannot itself do?
In the automation revolution, the bots were largely single-purpose; the bots couldn’t be created by bots. There could and probably will be trillions of jobs created by AI, but they will be done by trillions of agents. How many jobs do you really create if ChatGPT is so multi-purpose that it only takes one, say, 250k company to support it?
mjr00 · 5h ago
> What jobs will AI create that AI cannot itself do?
Part of the problem is the definition of "AI" is extremely nebulous. In your case, you seem to be talking about an AGI which can self-improve, while also having some physical interface letting it interact with the real world. This reality may be 6 months away, 6 years away, or 600 years away.
Given the current state of LLMs it's much more likely they will create jobs, or change workflows in existing jobs, rather than wholesale replace humans. The recent public spectacle of Microsoft's state-of-the-art Github Copilot Agent[0] shows we're quite far away from AI agents wholesale replacing even very junior positions for knowledge work.
Yeah, but LLMs won't stay at the current state. I don't understand this argument. Is there any particular reason to believe they'll stop getting better at this point?
mjr00 · 3h ago
> Is there any particular reason to believe that they'll stop getting better at this point?
Are they better now? When ChatGPT came out I could ask it for the song lyrics to The Beatles - Come Together. Now the response is
> Sorry, I can't provide the full lyrics to "Come Together" by The Beatles. However, I can summarize the song or discuss its meaning if you'd like.
You can argue that ChatGPT "knowing" that 9.11 is less than 9.9 or counting the 3 r's in strawberry means it's better now, but which am I more likely to ask an LLM?
techpineapple · 5h ago
Yes, I do think there is: LLMs are a paradigm with certain limited functionality, not a path to AGI. I actually find this assumption of constant, never-ending improvement of LLMs interesting. Almost all technology has diminishing returns in terms of improvements; why would LLMs be the exception? Why not believe that all future iterations of LLMs will be gradual improvements on current behavior rather than that LLMs can necessarily become superintelligent AGI?
vages · 4h ago
Most of the adults alive today have lived through a time when CPU speeds doubled every 12–24 months (see Moore's law). This has conditioned many to believe that all information technologies improve exponentially, while, in reality, most do not.
techpineapple · 5h ago
in a sense I think this is my question, is anyone writing any of these think pieces providing specific definitions?
karmakaze · 2h ago
Jevons Paradox likely applies here. There could be an initial reduction in jobs, but longer term humans using AI will reduce the cost (increase the efficiency) of those jobs which will increase demand more than merely satisfy it.
Basically any job that uses a word processor, spreadsheet, drawing tool, etc will all become more efficient and if Jevons Paradox applies, demand for those things will increase beyond the reduction due to efficiency gains.
I can imagine that for many fields it will be cheaper to have humans use AI (in the near term) rather than try to make fully automated systems that require no/little supervision.
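The Jevons argument above can be sketched with a toy constant-elasticity demand model (the numbers, the elasticity values, and the assumption that prices fall in proportion to costs are all illustrative assumptions, not data):

```python
# Toy Jevons-paradox model: demand follows Q = k * price^(-elasticity),
# and total labor demanded = quantity sold * labor hours per unit.
def total_labor(price, labor_per_unit, k=100.0, elasticity=1.5):
    quantity = k * price ** (-elasticity)
    return quantity * labor_per_unit

before = total_labor(price=1.0, labor_per_unit=1.0)
# Suppose AI halves the labor per unit, and competition halves the price.
after = total_labor(price=0.5, labor_per_unit=0.5)

# With elastic demand (elasticity > 1), demand grows faster than
# efficiency cuts labor, so total labor demanded rises:
print(after > before)  # True
# With inelastic demand (elasticity < 1), the same shock shrinks labor:
print(total_labor(0.5, 0.5, elasticity=0.5) < before)  # True
```

So whether the paradox applies to a given white collar field hinges entirely on how elastic the demand for its output is, which is exactly the open question.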
TheCoreh · 5h ago
The jobs created by the need to build and maintain robots (and industrial machinery in general) are very few compared to the number of jobs the machines replaced. The new jobs that the industrial and other technological revolutions created were mostly in other economic sectors, like services and commerce.
visarga · 5h ago
> What jobs will AI create that AI cannot itself do?
AIs lack skin in the game; they cannot bear responsibility for outcomes. They also don't experience desires or needs, so they depend on humans for that too.
To make an AI useful you need to apply it to a problem; in other words, it is in a specific problem context that AI shows utility. Like Linux, you need to use it for something to get benefits. Providing this problem space for AI is our part. So you cannot separate AI usefulness from people; problems are distributed across society, non-fungible.
I am not very worried about jobs, we tend to prefer growth to efficiency. In a world of AI, humans will remain the differentiating factor between companies.
If AI is roughly where IT was in the '60s, we might actually see decreased productivity for a while until people (yes, people) figure out how to use it effectively.
pixl97 · 3h ago
The question is: how long? Information moves very fast these days, and there are typically fewer broad secrets about tool usage and management.
skywhopper · 5h ago
You’re just making things up here. LLMs or other forms of “AI” can’t do most jobs, so it’s silly to speculate what will happen when it replaces humans in those jobs it can’t actually perform.
To the extent it can automate tasks under the direction of humans, it’s not even clear it makes those humans more productive, but it is clear that it harms those humans’ own skillsets (beyond prompt engineering).
pixl97 · 3h ago
>so it’s silly to speculate what will happen when it replaces humans in those jobs it can’t actually perform
Why?
If you're waiting around for AI to do these things, by the time it happens it will hit you like a truck. The speed of technological implementation is very fast these days, especially compared to the speed of regulation.
Moreover, we are not just seeing improvement in things like LLMs; there are broad improvements in robotics and generalized behavior in AI.
alephnerd · 5h ago
They can't do most jobs, but they can still reduce the number of jobs.
For example, even outsourcing giants like Infosys shrank hiring by 20% AND increased personnel utilization from 70% to 85% just by mandating that employees start using code-gen tools, and as a result were able to significantly enhance margins.
techpineapple · 5h ago
Technically the article is making things up, and I’m responding to those assertions.
skywhopper · 5h ago
Lots of silly things in this article, but in re the poll data they lean on so heavily, the “AI experts” are “people who work in the AI industry”. So, in other words, people who are highly biased by the money inflow that’s chasing human job destruction.
alephnerd · 5h ago
No.
Most of the recent (5 years) labor shocks from automation have been in white collar industries. White collar professions are those most likely to be impacted by AI/ML driven automation gains [0]
Industrial automation has already taken over skilled manufacturing jobs and HN's demographic of white collar professionals didn't complain then. Those who were impacted during that era of automation and outsourcing will not come out to the streets in solidarity with white collar employees getting impacted by Copilot or domain-specific models. Frankly, they mostly vote for Trump and HNers mostly voted for Harris.
Imo, a major undercurrent of the current culture war across the west is that it is morphing into a class war due to economic segregation leading to echo chambers in consumption [1] along with vocations [2]. As a result, White collar professionals live in an entirely different ecosystem from blue collar professionals.
No political force will fight for onshoring services jobs or incorporating barriers to reduce automation in most white collar jobs. We've already seen this realignment with skilled trades dominated unions like the UAW backing the current admin, but white collar leaning unions like AFL-CIO opposing them.
This article itself falls into the same trap of failing to recognize that HNers jobs are most exposed to AI and automation [3]. Low skill jobs have already been automated and commodified significantly over the last 25 years.
Agreed. AI looks to be a war between professional opinion-havers and fact-rememberers on one side and the future on the other. They finally see automation as dangerous because no one will need their opinions or their recall any more. The working class were already automated to death (and the opinion-havers cheered it on), because working-class opinions haven't been taken into consideration since Taylor and Ford figured out that we can eliminate individual knowledge in physical work and push that knowledge upwards into the institution itself. They can just lay you off and train some new ones, and the new hires will have a worse contract than you had.
The reason working people's jobs haven't been automated away is because people are cheaper and more conscientious than robots. Even worse, the robots are owned by famous rent-seekers who see you as prey, so defending yourself against the robots as a customer adds to the overhead of choosing them. How do you know it's not going to cost you 10x as much to fix/update them as it cost you last year? You don't, because you were locked into a scammer who unilaterally decided to raise rates on you or the entire industry, or simply got in over their head and shut down, leaving you without a vendor. Minimum wage employees don't do that.
Now asking some dipshit McKinsey consultant or some social media expert for 10 ideas to increase sales? LLMs can give you 100, summarize them, and rank them with references. They are only getting better at this. We criticize them because they only give us a list of obvious answers, but the reason we hire recent graduates is for them to give us the list of obvious answers they learned while being educated in their specialty. "Creativity" gets assigned to whoever has the social capital to be seen as a legitimate and worthy "creative," or, more accurately, the person with the money either chooses somebody to run the show who they suspect has a magic secret formula or aura, or they take on that identity themselves.
That white collar middle-class magic is gone. People who could do difficult arithmetic quickly used to make a lot of money, too.
> No political force will fight for onshoring services jobs or incorporating barriers to reduce automation in most white collar jobs.
The only people who get protection will be the ones unified enough to have the institutions to launder cash to give to politicians. Doctors, realtors, etc.
alephnerd · 4h ago
> AI looks to be a war between professional opinion-havers and fact-rememberers and the future
I disagree. I personally think SWEs (who are what most people think of as tech bros), accountants, back-office roles, and others are at the upper economic tier of society now - as wage and educational data has clearly shown.
These are the roles most at risk for automation, not because jobs will be 100% automated, but because automation now exists to reduce headcount by 20-30% and still generate the same output. This is what already happened in skilled trades over the last 40 years.
keybored · 4h ago
Tech CEO writes about hypothetical worker strikes and protests caused by AI. Accompanied by an AI stock photo with moronic placards.
I’m learning that every topic that people read should be about AI. Also the hypothetical ones. One I barely read one month ago was some pseudo-psychological self-care piece about how to soothe ourselves as we have to deal with the inevitability of AI changing our very self-identity. They do insist too much not to raise concerns about their intentions. Oh, and very regular commenters on the Web are also very weirdly insisting that if some specific percentage of “your code” or higher is not AI then some bad stuff is about to happen to you. Also whispers about management demanding... yes their very own percentages for how much AI code output. Could tech CEOs be motivated to write about every hypothetical thing that is not a thing yet (but will be immediately surely, probably tomorrow)? Of course it’s not my place to be cynical. Everyone who writes about AI has good intentions until proven otherwise.
Now, any CEO is comfortable with writing about protests against job loss in an American context. Because they just are flashes in time.
> Bias is a little different. America did flood the streets after George Floyd in a more sustained manner, for many months, but major reforms stalled out once marches faded and partisan lines re-hardened.
Do you remember when Occupy Wall Street changed America and put the 99% in charge? Me neither.[1] America has these heroic uprisings spread across years. Then what happens? The out-of-touch people in power listen a bit more attentively?
[1] In hindsight (three minutes later) that’s unfair and hyperbolic. What could have more realistically happened was that some organizing came out of it that would have survived to this day. Organizations that might have changed names and members and “leadership” (as everyone calls it now), but that anyone with a slight interest in the matter could point to and say: oh yeah, that’s a product of Occupy Wall Street, right there.
> How do we help the displaced?
Of course the chosen focus is on helping the displaced. Do they deserve handouts, and from whom? But this misses the mark, right? If AI hypers like himself are correct, then “helping” becomes at best outdated, outmoded. I’m sure the powers that be would love the dichotomy of helpless displaced people getting either help from the government or the “AI winners” (he uses scare quotes for reasons).[2]
But if AI job displacement becomes massive this is a dead end. Displace enough people without making new jobs and jobs themselves become outmoded. Remember that people don’t strike or protest “for jobs” because they intrinsically want a wage job... they want to live and survive and this is the means they have to do that. So what if jobs are gone, then what? Then the means of AI should be socialized. If AI displaces jobs we don’t need jobs any more. Because we can just use AI. If we own the AI... not if dozens of billionaires own them. Some CEOs might disagree on this point.
[2] The right-wing libertarians are the ones who love Universal Basic Income.
> Two historical moments that resemble this pattern: When automated textile frames wiped out skilled jobs in the U.K. in the early 1800s, the Luddite riots turned violent enough (including killing a factory owner) that Parliament dispatched roughly 12,000 troops to restore order, which concluded with over a dozen executions.
Noteworthy that he chose a violent crackdown example (sorry, restoring order) with no mention of any fruits that were won.
I have no doubt that he will report any violent occurrences of restoring order as dispassionately as he does here on this hypothetical piece.
> Do protests even matter?
> I need to dig in more, but the short answer seems to indicate yes, in a few ways. They seem to make the people who attend them more politically motivated, at least for that issue. As a result, if enough people go to them, then they can swing elections.
Political scientists are funny. The kind of people who make ostensibly democratic participation into rat maze experiments.
The link just seems to be about whether or not protests can affect elections. Like they specifically and narrowly studied that. That’s a bit boring?
I don’t think the Vietnam War protests just helped impact elections. Are you kidding me? The anti-war movement up until Reagan forced that administration to make their Central America interference clandestine. Well, it was bad enough that they did it. But popular resistance was strong enough that they forced them to at least do some things differently. Imagine a repressed or apathetic society where the government just supports anti-democratic paramilitary groups in foreign countries out in the open. That’s worse in a sense.
There’s some anecdote about Kissinger and Nixon in a car where Nixon says that he’s afraid that the crowd could kill them. That’s a bit more than swinging an election.
I’m sure that Apartheid South Africa was crushed in part because of protests and boycotts. Although it wasn’t on my news back then.
... But what do we get according to this article? Swinging an election. Wow. That sure will motivate Americans with their two-party system where either party serves slightly different sets of corporate sectors.
Notice these people. These people who insist that you protesting and organizing and working to change things, well it might move the needle electorally. That’s what they want you to think. That that’s all the political power you have. You can change the ballots that are presented to you. And you can persuade fellow voters. That’s it. Now go home.
We are gonna have to change more than that if the AI hypers are correct.
immibis · 5h ago
No. Feudalism is the normal state of human society, and everyone is okay with that.
> the shoplifting deterrence effect of staffing
This is all cost modeled. They have lots of cameras and security staff by the door. Even if the tech doesn't work, the mere threat of getting caught is enough to stop most losses. The business accepts that they won't catch everything. They're still saving money by using automated checkout.
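The tradeoff described above can be sketched as a back-of-the-envelope cost model. All figures below are hypothetical, purely to show the shape of the calculation:

```python
# Toy model of the retailer's tradeoff: replace staffed lanes with
# self-checkout kiosks and accept some extra shrink (theft, scan errors).
# Every number here is made up for illustration.

def annual_cost(cashiers, kiosks, revenue, shrink_rate,
                wage=35_000, kiosk_upkeep=5_000):
    """Yearly labor + kiosk upkeep + losses to shrink."""
    return cashiers * wage + kiosks * kiosk_upkeep + revenue * shrink_rate

staffed = annual_cost(cashiers=8, kiosks=0,
                      revenue=10_000_000, shrink_rate=0.010)
self_checkout = annual_cost(cashiers=2, kiosks=10,
                            revenue=10_000_000, shrink_rate=0.015)

print(staffed, self_checkout)  # self-checkout wins despite 50% more shrink
```

Under numbers like these the store comes out ahead even if shrink rises by half, which is presumably why cameras plus one person at the door is considered good enough.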
It seems exceptionally well-suited to AI-based automation because software engineering has already needed to figure out how to efficiently cope with humans who sometimes produce code that may have defects: most obviously, automated testing and type systems. A lot of programming tasks also have verifiable solutions, which makes the field better suited for training too. It seems natural that the most obviously successful AI tools are for coding.
Yet the software industry has also been absorbing waves of automation for 70 years. Fortran and COBOL were referred to as "automated programming," and there was a narrative that these new tools would make it easy for non-specialists to program. This time may be different, or it may just be another wave of automation.
I think software has pretty unique properties among white collar jobs, and I would hesitate to draw conclusions about other industries based on AI progress in software engineering.
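A toy illustration of what "verifiable" means here: a coding task can ship with a mechanical checker that accepts or rejects any candidate solution, which is exactly the kind of reward signal training can use. The task and checker below are invented for illustration:

```python
# A programming task is "verifiable" when a checker can grade any candidate
# solution automatically, with no human judgment needed.

def candidate_solution(xs):
    """Task: return the running maximum of a list."""
    out, best = [], float("-inf")
    for x in xs:
        best = max(best, x)
        out.append(best)
    return out

def checker(fn):
    """Mechanical grader: pass/fail with no human in the loop."""
    return (fn([3, 1, 4, 1, 5]) == [3, 3, 4, 4, 5]
            and fn([]) == []
            and fn([-2, -5]) == [-2, -2])

print(checker(candidate_solution))  # True: solution accepted
```

Most prose tasks have no equivalent of `checker`, which is one reason coding is the domain where these tools advanced first.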
Realistically, we are going to see 20-30% reductions in headcount in the near-to-medium term. THIS IS STILL CATASTROPHIC.
A number of earlier stage companies I've funded have already been heavily utilizing automation to simplify code generation or scaffolding/project ops work. They use the cost savings to hire experienced SWEs at high base salaries and are able to hit the same development metrics as they would have with a large team of average-paid SWEs. On the BDR side, they are using a massive amount of video/audio automation to scale out cold calling or first impressions, reducing the need to hire teams of BDRs hitting the phones all the time. And finally, they are automating tier 1/2 support ticket responses and communication, reducing the need for teams of support engineers spending time basically responding to customers with the polite equivalent of "read the docs".
Basically, a Series A startup that would have had a staff of 50 employees 10 years ago can essentially output the exact same as a Series A startup with a headcount of 20 employees today, and with a tangible path to FCF positivity.
This is a net reduction in jobs, and a significant one at that, because most people just cannot upskill - it's hard.
April 29, 2014 - "Milken 2014: Driverless cars due in five years"[0]
Nov 24, 2015 - "Ford is 5 years away from self-driving cars"[1]
Oct 20, 2016 - "A Driverless Tesla Will Travel From L.A. to NYC by 2017"[2]
Now the general consensus is that level 5 autonomous self-driving is decades away, at least.
[0] https://www.usatoday.com/story/money/cars/2014/04/29/milken-...
[1] https://www.businessinsider.com/ford-is-5-years-away-from-se...
[2] https://www.nbcnews.com/business/autos/driverless-tesla-will...
Remember that this was before transformers, LLMs, and more recently VLM/VLAMs, though.
I'm not sure to what extent these are already integrated in self-driving hardware, but I would not be surprised if we see a big improvement in self-driving due to related technologies soon, especially with the smaller models becoming far more (efficient and) potent.
The typical tech person will reply to this with some variant of "that shouldn't matter". Well, it does.
It's like saying Windows 95 doesn't have any security flaws, as long as you don't connect it to the internet.
Me: My Tesla takes me 20min to work every day without human intervention, and on long trips of hundreds of miles
You: This is illegal because it might kill someone
Call the police on Tesla, I guess?
But I think the smart money is that Waymo will get appreciably better in the 5-10 year timeline. Perhaps massively, crazily better, but at least steadily, noticeably better.
My bad.
They're deploying Waymo in Atlanta now and they're already driving around sans-humans.
Atlanta is not a grid. Atlanta has crazy and dangerous road design. Random roads built in random places. Potholes, lots of hills, sharp angles, lack of shoulders, no margin or protection from 45mph oncoming traffic (90mph delta with ZERO margin or division), weird infrastructure, five and six and even seven way intersections, pedestrians, jaywalkers, motorcycle gangs that ride in the wrong lanes. You name it, Atlanta's got it.
Atlanta has lots of weather. Tremendous amounts of subtropical rain in the summer, fog, and other low visibility and dangerous conditions. Intermittent summer rain that pours so hard that it's a literal rain curtain white out. Seasonal tornadoes.
Self-driving is here now. It'll be everywhere by the end of the decade. I can't wait to buy my own Waymo-equipped vehicle.
That time when people had blocky cell phones. Then they were in everybody's pocket.
That's self-driving now.
It's here, just uneven. You can ride in one today. Soon, it'll be taking over for trucking, delivering everything you can think of on-demand, and revolutionizing how we think about transit and travel.
Instacart and Uber Eats and Amazon same-day are blocky cell phones. Soon we'll have instant logistics. A ten times faster Amazon. A world where almost everything can be delivered instantly and cheaply.
Trains and subways and metros are rigid and inflexible transit corridors. Soon we'll have last-mile transit pods for everyone that connect everything and everywhere and make suburbs highly desirable and sought after again.
Self-driving will be like the internet or smartphones all over again. And we're right at the precipice.
The financial gradients of all of this ensure that it'll happen and that it'll happen fast and all at once once we get past the inflection point.
But back to your point: you wouldn't call those blocky cell phone users non-cellphone users. The tech just wasn't evenly distributed. It got there in short time, because the tech was world-changing. So too shall it be with self-driving. The economic advantages are inevitable.
And you could do it in 2020. It seems unlikely that it will be “everywhere” in 2030 unless you give that word a quite restricted meaning.
Whether this amount of success is proof that there won't be any issues with the tech in that area remains to be seen. Hell, they're not even interested yet in talking about how this may pan out outside their Houston trial runs.
Society as a whole will produce more products and services and have to work less for this.
In other words, products and services will become much cheaper.
In the future, living on social welfare might offer a lifestyle more luxurious than working full-time is now.
So if most people's lives go from "I have to work hard to pay my bills" to "I don't have to work as hard and can afford more", that will not drive people to fight in the streets.
https://www.epi.org/productivity-pay-gap/
The question of who will benefit from AI is a political question. As power is currently firmly in the hands of the capital class, AI will only make the rich richer and the poor poorer.
Wages are not based on how much value you produce but on how much it costs to reproduce your labor: how much it costs for you to stay alive, remain able to work, and feed your family. What kind of living standard that entails is a political matter and depends on how strong your unions are, how willing you are to fight for it, and so on.
Same with welfare. It is something that was fought for, and if the power balance is such that they can get away with giving you less, they will. In fact, some rich pricks would gladly turn you into Soylent Green if they could get away with it.
I have hope in decentralized technologies. Solar power is a rare example. If things go right, then in a few decades your typical household will just not depend on anybody anymore for its energy needs. That's a significant shift.
That is, if big corps don't manage to capture that somehow, though I can't currently see how. But it has happened before. The www and email used to be as decentralized as you could imagine. Nowadays rarely does any email get sent that doesn't come from or go to a Gmail server, or at least one of the three biggest providers. Similarly with the www: everything is so centralized now, it's a great success of big tech.
Frontier models are being provided at a loss right now, to my "knowledge", and in my experience they're already about as crappy as you'd tolerate for worker replacement, if not crappier.
So right now, I have a really hard time seeing LLMs as money well spent if they were bought at their real cost.
It doesn't matter how many products and services your society shits out, what matters is how you organize it. And clearly we're already way past the point where the amount of products and services is a bottleneck to quality of life. Access to healthcare, job security, home ownership, the ability to afford kids, good food, all of these things are going down, not up, despite the fact that as a whole we're much better off than in the last century.
Virtually no one can sustain a family on a single salary; it was the norm not so long ago:
https://www.researchgate.net/profile/Haruki-Seitani/publicat...
Housing is more and more out of reach:
https://www.researchgate.net/publication/335721006/figure/fi...
Most of the productivity gain doesn't translate into income:
https://files.epi.org/charts/img/235212-28502-body.png
Sure enough, we now have access to instant brain-rot doomscrolling services, infinite low-quality entertainment, vast amounts of ultra-processed food, &c., but if anything these things are distractions.
And as exhibit A I provide progress in automation in the last decades (enormous) in relation to the median buying power of the population in developed nations.
It just doesn't work the way you describe, but I have sympathies for naïvely believing that it does.
Same goes for my movie theater: how did it double its price in the past 5 years while having fewer employees? Oh wait, there's only one theater in town and I don't have a car; that's probably why.
This takes for granted that AI is extremely cheap. Products and services don’t become cheaper just because they are easier to produce. One way to get rich is artificial scarcity.
Now the only way I know that people in most jobs got better pay was to organize. Including striking. You can’t bully people too much if they can shut down sectors of the local community or country.
Cheap AI can make things cheap. Or not. Who owns it? Some dozens of rich people? Do they have to make it cheap? Do governments have to make living on social welfare (who says there will be social welfare?[1]) “more luxurious than working full-time is now” when you look around and... there’s no one to pressure them to do that any more?
[1] Some people with a lot of money stuffed in their ears think that social welfare is something that the poors/the lazy can just fall back on indefinitely. They know nothing about means testing or the fact that social welfare has been fought against by the very rich (ideologues who are savvy, they certainly didn’t stuff money in their ears) since social welfare was introduced.
What jobs will AI create that AI cannot itself do?
In the automation revolution, the bots were largely single-purpose, and the bots couldn’t be created by bots. There could and probably will be trillions of jobs created by AI, but they will be done by trillions of agents. How many jobs do you really create if ChatGPT is so multi-purpose that it only takes one, say, 250k company to support it?
Part of the problem is the definition of "AI" is extremely nebulous. In your case, you seem to be talking about an AGI which can self-improve, while also having some physical interface letting it interact with the real world. This reality may be 6 months away, 6 years away, or 600 years away.
Given the current state of LLMs it's much more likely they will create jobs, or change workflows in existing jobs, rather than wholesale replace humans. The recent public spectacle of Microsoft's state-of-the-art Github Copilot Agent[0] shows we're quite far away from AI agents wholesale replacing even very junior positions for knowledge work.
[0] https://news.ycombinator.com/item?id=44050152
Are they better now? When ChatGPT came out I could ask it for the song lyrics to The Beatles - Come Together. Now the response is
> Sorry, I can't provide the full lyrics to "Come Together" by The Beatles. However, I can summarize the song or discuss its meaning if you'd like.
You can argue that ChatGPT "knowing" that 9.11 is less than 9.9 or counting the 3 r's in strawberry means it's better now, but which am I more likely to ask an LLM?
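For reference, both of those stock gotcha questions have mechanical answers that trivial code gets right:

```python
# The two benchmark "gotchas" mentioned above, answered deterministically.
assert "strawberry".count("r") == 3  # three r's in "strawberry"
assert 9.11 < 9.9                    # 9.11 is numerically less than 9.9
print("both check out")
```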
Basically any job that uses a word processor, spreadsheet, drawing tool, etc. will become more efficient, and if Jevons' paradox applies, demand for those things will increase beyond the reduction due to efficiency gains.
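Jevons' paradox is a quantitative claim: if demand is elastic enough, cheaper output means more total consumption of the input, not less. A toy sketch with constant-elasticity demand (all numbers hypothetical):

```python
# Toy Jevons paradox: double the efficiency (halve the labor per document),
# and if demand is elastic (elasticity > 1), total labor demanded can rise.
# Constant-elasticity demand curve: quantity = k * price ** (-elasticity).

def total_labor(labor_per_unit, elasticity, k=100.0, wage=1.0):
    price = labor_per_unit * wage          # cost passed through to price
    quantity = k * price ** (-elasticity)  # demand responds to lower price
    return quantity * labor_per_unit

before = total_labor(labor_per_unit=1.0, elasticity=1.5)
after = total_labor(labor_per_unit=0.5, elasticity=1.5)  # 2x efficiency

print(before, after)  # demand grows enough that total labor rises
```

With elasticity below 1 the same calculation goes the other way, which is why whether AI efficiency gains create or destroy jobs is an empirical question about demand, not something the technology alone decides.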
I can imagine that for many fields it will be cheaper to have humans use AI (in the near term) rather than try to make fully automated systems that require no/little supervision.
AIs lack skin in the game; they cannot bear responsibility for outcomes. They also don't experience desires or needs, so they depend on humans for that too.
To make an AI useful you need to apply it to a problem, in other words it is in a specific problem context that AI shows utility. Like Linux, you need to use it for something to get benefits. Providing this problem space for AI is our part. So you cannot separate AI usefulness from people, problems are distributed across society, non-fungible.
I am not very worried about jobs, we tend to prefer growth to efficiency. In a world of AI, humans will remain the differentiating factor between companies.
If AI is roughly where IT is in the 60's, we might see actually decreased productivity for a while until people (yes people) figure out how to use it effectively.
To the extent it can automate tasks under the direction of humans, it’s not even clear it makes those humans more productive, but it is clear that it harms those humans’ own skillsets (beyond prompt engineering).
Why?
If you're waiting around for AI to do these things, by the time it happens it will hit you like a truck. The speed of technological implementation these days is very fast, especially compared to the speed of regulation.
Moreover, we are not just seeing improvement in things like LLMs; there are broad improvements in robotics and in generalized AI behavior.
For example, even outsourcing giants like Infosys shrank hiring by 20% AND increased personnel utilization from 70% to 85% just by mandating employees to start using code-gen tools, and as a result were able to significantly enhance margins.
Most of the recent (5 years) labor shocks from automation have been in white collar industries. White collar professions are those most likely to be impacted by AI/ML driven automation gains [0]
Industrial automation has already taken over skilled manufacturing jobs and HN's demographic of white collar professionals didn't complain then. Those who were impacted during that era of automation and outsourcing will not come out to the streets in solidarity with white collar employees getting impacted by Copilot or domain-specific models. Frankly, they mostly vote for Trump and HNers mostly voted for Harris.
Imo, a major undercurrent of the current culture war across the west is that it is morphing into a class war due to economic segregation leading to echo chambers in consumption [1] along with vocations [2]. As a result, White collar professionals live in an entirely different ecosystem from blue collar professionals.
No political force will fight for onshoring services jobs or incorporating barriers to reduce automation in most white collar jobs. We've already seen this realignment with skilled trades dominated unions like the UAW backing the current admin, but white collar leaning unions like AFL-CIO opposing them.
[0] - https://sms.onlinelibrary.wiley.com/doi/full/10.1002/smj.328...
[1] - https://businesslawreview.uchicago.edu/print-archive/how-did...
[2] - https://statmodeling.stat.columbia.edu/2017/12/19/red-doc-bl...
[3] - https://www.pewresearch.org/social-trends/2023/07/26/which-u...