Google AI Overview made up an elaborate story about me
526 points by jsheard on 9/1/2025, 2:27:17 PM | 219 comments | bsky.app ↗
Almost every summary I have read through contains at least one glaring mistake, but if it's something I know nothing about, I could see how easy it would be to just trust it, since 95% of it seems true/accurate.
"Trust, but verify" is all the more relevant today. Except I would discount the trust, even.
A growing number of Discords, open source projects, and other spaces where I participate now have explicit rules against copying and pasting ChatGPT content.
When there aren’t rules, many people are quick to discourage LLM copy and paste. “Please don’t do this”.
The LLM copy and paste wall of text that may or may not be accurate is extremely frustrating to everyone else. Some people think they’re being helpful by doing it, but it’s quickly becoming a social faux pas.
This doesn't seem to be universal across all people. The techier crowd, the kind of people who may not immediately trust LLM content, will try to prevent its usage. You know, the type of people to run Discord servers or open-source projects.
But completely average people don't seem to care in the slightest. The kind of people who are completely disconnected from technology just type in whatever, pick the parts they like, and then parade the LLM output around: "Look at what the all-knowing truth machine gave me!"
Most people don't care and don't want to care.
For people who are newer to it (most people) they think it’s so amazing that errors are forgivable.
The problem is that ChatGPT results are getting significantly better over time. GPT-5 with its search tool outputs genuinely useful results without any glaring errors for the majority of things I throw at it.
I'm still very careful not to share information I found using GPT-5 without verifying it myself, but as the quality of results goes up, the social stigma against sharing them is likely to fade.
Use an agent to help you code or whatever all you want, I don’t care about that. At least listen when I’m trying to share some specific knowledge instead of fobbing me off with GPT.
If we’re both stumped, go nuts. But at least put some effort into the prompt to get a better response.
"ChatGPT says X" seems roughly equivalent to "some random blog I found claims X". There's a difference between sharing something as a starting point for investigation and passing off unverified information (from any source) as your own well researched/substantiated work which you're willing to stake your professional reputation on standing by.
Of course, quoting an LLM is also pretty different from merely collaborating with an LLM on writing content that's substantially your own words or ideas, which no one should care about one way or another, at least in most contexts.
https://en.m.wikipedia.org/wiki/Gell-Mann_amnesia_effect
In the programming domain I can at least run something and see it doesn't compile or work as I expect, but you can't verify that a written statement about someone/something is the correct interpretation without knowing the correct answer ahead of time. To muddy the waters further, things work just well enough on common knowledge that it's easy to believe it could be right about uncommon knowledge which you don't know how to verify. (Or else you wouldn't be asking it in the first place)
With the same code out of an intern or junior programmer, you can at least walk through their reasoning in a code review. Even better if they tend to learn and not make that same mistake again. LLMs will happily screw up randomly on every repeated prompt.
The hardest code you encounter is code written by someone else. You don't have the same mental model or memories as the original author. So you need to build all that context and then reason through the code. If an LLM is writing a lot of your code you're missing out on all the context you'd normally build writing it.
answer> Temporal.Instant.fromEpochSeconds(timestamp).toPlainDate()
Trust but verify?
In the first chapter it claimed that most adult humans have 20 teeth.
In the second chapter you read that female humans have 22 chromosomes and male humans have 23.
You find these claims in the 24 pages you sample. Do you buy the book?
Companies are paying huge sums to AI companies with worse track records.
Would you put the book in your reference library if somebody gave it to you for free? Services like Google or DuckDuckGo put their AI-generated content at the top of search results with these inaccuracies.
[edit: replace paragraph that somehow got deleted, fix typo]
Is it too late for a rival to distinguish itself with techniques like "Don't put garbage AI at the top of search results"?
Or have I missed your point?
---
°Missing a TZ assertion, but I don't remember what happens by default. Zulu time? I'd hope so, but that reinforces my point.
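For what it's worth, here's a sketch of what a timezone-explicit version could look like; this is just my assumption of a safer shape, using the Temporal calls I'm reasonably sure exist in the current proposal (assuming a Temporal implementation or polyfill, and a Unix timestamp in seconds):

    // Being explicit about the time zone avoids relying on whatever the default is.
    const timestamp = 1756684800; // hypothetical epoch seconds (2025-09-01T00:00:00Z)
    const instant = Temporal.Instant.fromEpochMilliseconds(timestamp * 1000);
    const date = instant.toZonedDateTimeISO('UTC').toPlainDate();
    console.log(date.toString()); // "2025-09-01"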
If this is indeed true, it seems like Google et al must be liable for output like this according to their own argument, i.e. if the work is transformative, they can’t claim someone else is liable.
These companies can’t have their cake and eat it too. It’ll be interesting to see how this plays out.
how about stop forming judgments of people based on their stance on Israel/Hamas, and stop hanging around people who do, and you'll be fine. if somebody misstates your opinion, it won't matter.
probably you'll have to drop bluesky and parts of HN (like this political discussion that you urge be left up) but that's necessary because all legitimate opinions about Israel/Hamas are very misinformed/cherry picked, and AI is just flipping a coin which is just as good as an illegitimate opinion.
(if anybody would like to convince me that they are well informed on these topics, i'm all ears, but doing it here is imho a bad idea so it's on you if you try)
People make judgments about people based on second hand information. That is just how people work.
Sure, there is plenty of misinformation being thrown in multiple different directions, but if you think literally "all legitimate opinions" are "misinformed/cherry picked", then odds are you are just looking at the issue through your own misinformed frame of reference.
yes, i literally do think that, so there are no odds.
This has been a Google problem for decades.
I used to run a real estate forum. Someone once wrote a message along the lines of "Joe is a really great real estate agent, but Frank is a total scumbag. Stole all my money."
When people would Google Joe, my forum was the first result. And the snippet Google made from the content was "Joe... is a total scumbag. Stole all my money."
I found out about it when Joe lawyered up. That was a fun six months.
[0] https://www.youtube.com/watch?v=qgUzVZiint0
It's somewhat less obvious to debug, because it'll pull more context than Google wants to show in the UI. You can see this happening in AI mode, where it'll fire half a dozen searches and aggregate snippets of 100+ sites before writing its summary.
[1] https://www.webpronews.com/musician-benn-jordan-exposes-fake...
Benn Jordan has several videos and projects devoted to "digital sabotage", e.g. https://www.google.com/search?hl=en&q=benn%20jordan%20data%2...
So this all kind of looks on its face like it's just him trolling. There may be more than just what's on the face, of course. For example, it could be someone else trolling him with his own methods.
But the situation we're in is that someone who does misinformation is claiming an LLM believed misinformation. Step one would be getting someone independent, ideally with some journalistic integrity, to verify Benn's claims.
Generally speaking, if your Aunt Sally claims she ate strawberry cake for her birthday, the LLM or Google search has no way of verifying that. If Aunt Sally uploads a faked picture of her eating strawberry cake, the LLM is not going to go to her house and try to find out the truth.
So if Aunt Sally is lying about eating strawberry cake, it's not clear what search is supposed to return when you ask whether she ate strawberry cake.
Good thing I know aunt Sally is a pathological liar and strawberry cake addict, and anyone who says otherwise is a big fat fake.
You either try hard to tell the objective truth or you bend the truth routinely to try to make a "larger" point. The more you do the latter the less credit people will give your word.
in googlis non est, ergo non est ("it is not in Google, therefore it does not exist")
which sums up very well how biased people are toward believing the search results.
Thinking about it, it's probably not even a real hallucination in the normal AI meaning, but simply poor evaluation and handling of data. Gemini is likely evaluating the new data on the spot and trusting it blindly; without any humans preselecting and writing the results, it fails hard. Which shows that there is no real thinking happening, only rearrangement of the given words.
It's literally bending languages into American with other words.
It's not just the occasional major miss, like this submission's example, or the recommendation to put glue on a pizza. I highly recommend Googling a few specific topics you know well. Read each overview entirely and see how often it gets something wrong. For me, only 1 of 5 overviews didn't have at least 1 significant error. The plural of "anecdote" is not "data," but it was enough for me to install a Firefox extension that blocks them.
But for anything dynamic (i.e. all of social media), it is very easy for the AI overview to screw up. Especially once it has to make relational connections between things.
In general people expect too much here. Google AI overview is in no way better than Claude, Grok or ChatGPT with web search. In fact it is inferior in many ways. If you look for the kind of information which LLMs really excel at, there's no need to go to Google. And if you're not, then you'll also be better off with the others. This whole thing only exists because google is seeing OpenAI eat into its information search monopoly.
AI summaries are akin to generalist podcasts, or YouTube video essayists, taking on a technical or niche topic. They present with such polish and confidence that they seem like they must be at least mostly correct. Then you hear them present or discuss a topic you have expertise in, and they are frustratingly bad. Sometimes wrong, but always at least deficient. The polish and confidence is inappropriately boosting the "correctness signal" to anyone without a depth of knowledge.
Then you consider that 90% of people have not developed sophisticated knowledge about 90% of topics (myself included), and it begins to feel a bit grim.
> Video and trip to Israel: On August 18, 2025, Benn Jordan uploaded a YouTube video titled "I Was Wrong About Israel: What I Learned on the Ground", which detailed his recent trip to Israel.
This sounds like the recent Ryan Macbeth video https://youtu.be/qgUzVZiint0?si=D-gJ_Jc9gDTHT6f4. I believe the title is the same. Scary how it just misattributed the video.
But then the product manager wouldn't get a promotion. They don't seem to care about providing a good service anymore.
> probably dumber than the other ones for cost savings
It's amusing that anyone at Google thinks offering a subpar, error-prone AI search result won't damage their reputation even further than it already is.
It's making stuff up, giving bad or fatal advice, promoting false political narratives, stealing content and link juice from actual content creators. They're abusing their anti-competitively dominant position, and just burning good will like it's gonna last forever. Maybe they're too big to fail, and they no longer need reputation or the trust of the public.
https://en.wikipedia.org/wiki/Parable_of_the_broken_window
It doesn't even cover non-renewable resources, or state that the window intact is a form of wealth on its own!
I'm not naive, I'm sure thousands have made these arguments before me. I do think intact windows are good. I'm just surprised that particular framing is the one that became the standard
We have them on the record in multiple lawsuits stating that they did exactly this.
no, ads are their flagship product. Anything else is just a medium for said ads, and therefore fair game for enshittification.
"AI responses may include mistakes. Learn more"
how would you feel if someone searched for your name, and Google's first result states that you, unambiguously (by name and city) are a registered sex offender?
not a quote from someone else, just completely made up based on nothing other than word salad
would you honestly think "oh that's fine, because there's a size 8 text at the bottom saying it may be incorrect"
I very much doubt it
It doesn't feel like something people gradually pick up on over the years, either; it just feels like sarcasm is either redundantly pointed out for those who get it, or guaranteed to get a literal-interpretation response.
Maybe it's because the literal interpretation of sarcasm is almost always so wrong that it inspires people to comment much more. So we just can't get away from this inefficient encoding/communication pattern.
But then again, maybe I'm just often assuming people mean things that sound so wrong to me as sarcasm, so perhaps there are a lot of people out there honestly saying the opposite to what I think they are saying as a joke.
And yeah, to your point about the literal interpretation of sarcasm being so absurd people want to correct it, I think you’re right. HN is a particularly pedantic corner of the internet, many of us like to be “right” for whatever reason.
But that aside, it is just simply the case that there are a lot of reasons why sarcasm can fail to land. So you just have to decide whether to risk ruining your joke with a tone indicator, or risk your joke failing to land and someone "correcting" you.
There is also a cultural element. Countries like the UK are used to deadpan where sarcasm is delivered in the same tone as normal, so thinking is required. In Japan the majority of things are taken literally.
Apart from that, it is also true that a lot of people here aren't Americans (hello from Australia). I know this is a US-hosted forum, but it is interesting to observe the divide between Americans who speak as if everyone else here is an American (e.g. "half the country") and those who realise many of us aren't
>how would you feel if someone searched for your name, and Google's first result states that you, unambiguously (by name and city) are a registered sex offender?
Suppose AI wasn't in the picture, and google was only returning a snippet of the top result, which was a slanderous site saying that you're a registered sex offender. Should google still be held liable? If so, should they be held liable immediately, or only after a chance to issue a correction?
In the latter case I'm fine with "yes" and "immediately". When you build a system that purports to give answers to real world questions, then you're responsible for the answers given.
information is from another website and may not be correct.
So all google had to do was reword their disclaimer differently?
No, there's no "wording" that gets you off the hook. That's the point. It's a question of design and presentation. Would a legal "Reasonable Person" seeing the site know it was another site's info, e.g. literally showing the site in an iframe, or is google presenting it as their own info?
If google is presenting the output of a text generator they wrote, it's easily the latter.
Nice try, but asking a question confirming your opponent's position isn't a strawman.
>No, there's no "wording" that gets you off the hook. That's the point. It's a question of design and presentation. Would a legal "Reasonable Person" seeing the site know it was another site's info, e.g. literally showing the site in an iframe, or is google presenting it as their own info?
So you want the disclaimer to be reworded and moved up top?
You cannot make a safe lawnmower. However, lawnmower makers can't just put a danger label on and get by with something dangerous; they have to put on every guard they can first. Even then they often have to show in court that the mower couldn't work as a mower if they put in a guard to prevent some specific injury, and thus they added the warning.
Which is to say that so long as they can do something and still work as a search engine, they are not allowed to fall back on a disclaimer. The disclaimer is only for the cases where a fix would mean they wouldn't be a search engine anymore.
instead of the ai saying "gruez is japanese" it should say "hacker news alleges[0] gruez is japanese"
there shouldn't be a separate disclaimer: the LLM should tell true statements rather than imply that the claims are true.
It isn't inherently, but it certainly can be! For example in the way you used it.
If I were to ask, confirming your position, "so you believe the presence of a disclaimer removes all legal responsibility?" then you would in turn accuse me of strawmanning.
Back to the topic at hand, I believe the bar that would need to be met exceeds the definition of "disclaimer", regardless of wording or position. So no.
In your hypothetical, Google is only copying a snippet from a website. They're only responsible for amplifying the reach of that snippet.
In the OP case, Google are editorializing, which means it is clearly Google's own speech doing the libel.
The source of the defamatory text is Google’s own tool, therefore it is Google’s fault, and therefore they should be held liable immediately.
You wouldn't punish the person who owns the park if someone inside it breaks the law, as long as the owner was facilitating compliance with the law. And Google facilitates the law by allowing you to take down slanderous material by putting in a request, and further you can go after the original slanderer if you like.
But in this case Google itself is putting out slanderous information it has created itself. So Google, in my mind, is left holding the bag.
Wouldn't this basically make any sort of AI as a service untenable? Moreover how would this apply to open weights models? If I asked llama whether someone was a pedophile, and it wrongly answered in the affirmative, can that person sue meta? What if it's run through a third party like Cerebras? Are they on the hook? If not, is all that's needed for AI companies to dodge responsibility is to launder their models through a third party?
If the service was good enough that you'd accept liability for its bad side effects, no?
If it isn't good enough? Good riddance. The company will have to employ a human instead. The billionaires' coffers will take the hit, I'm sure.
E: > If not, is all that's needed for AI companies to dodge responsibility is to launder their models through a third party?
Honestly, my analogy would be that an LLM is a tool like a printing press. If a newspaper prints libel, you go after the newspaper, not the person that sold them the printing press.
Same here. It would be on the person using the LLM and disseminating its results, rather than the LLM publisher. The person showing the result of the LLM should have some liability if those results are wrong or cause harm
As a form of argument, this strikes me as pretty fallacious.
Are you claiming that the output of a model built by Google is somehow equivalent to displaying a 3rd party site in a search result?
I did inspect element and it's actually 12px (or 9pt). For context the rest of the text (non-header) is 18px. That seems fine to me? It's small to be unobtrusive, but not exactly invisible either.
Especially in an area you own, like your own website or property.
Want to dump toxic waste in your backyard? Just put up a sign so your neighbors know, then if they stick around it's on them, really, no right to complain.
Want to brake-check the person behind you on the highway? Bumper sticker that says "this vehicle may stop unexpectedly". Wow, just like that you're legally off the hook!
Want to hack someone's computer and steal all their files? Just put a disclaimer on the bottom of your website letting them know that by visiting the site they've given you permission to do so.
You can't just put up a sticker premeditating your property damage and then it's a-okay.
No, the sticker is there to deter YOU from suing in small claims court. Because you think you can't. But you can! And their insurance can cover it!
Stop strawmanning. Just because I support google AI answers with a disclaimer, doesn't mean I think a disclaimer is a carte blanche to do literally anything.
I do understand it is a complicated matter, but looks like Google just want to be there, no matter what, in the GenAI race. How much will it take for those snippets to be sponsored content? They are marketing them as the first thing a Google user should read.
What you said might be true in the early days of google, but google clearly doesn't do exact word matches anymore. There's quite a lot of fuzzy matches going on, which means there's arguably some editorializing going on. This might be relevant if someone was searching for "john smith rapist" and got back results for him sexually harassing someone. It might even be phrased in such a way that makes it sound like he was a rapist, eg. "florida man accused of sexually...". Moreover even before AI results, I've seen enough people say "google says..." in reference to search results that it's questionable to claim that people think non-AI search results aren't by google.
Considering the extent to which people have very strong opinions about "their" side in the conflict, to the point of committing violent acts especially when feeling betrayed, I don't think spreading this particular piece of disinformation is any less potentially dangerous than the things I listed.
> As evidenced by the quote "I think a disclaimer is a carte blanche to do literally anything", the hackernews user <gruez> is clearly of the opinion that it is indeed ok to do whatever you want, as long is there is a sign stating it might happen.
* This text was summarized by the SpaceNugget LLM and may contain errors, and thusly no one can ever be held accountable for any mistakes herein.
We actually win customers whose primary goal is getting AI to stop badmouthing them.
Being wrong is usually not a punishable offence. It could be considered defamation, but defamation is usually required to be intentional, and that is clearly not the case here. And I think most AIs have disclaimers saying that the output may be wrong, and hallucinations are pretty common knowledge at this point.
What could be asked is for the person in question to be able to make a correction, it is actually a legal requirement in France, probably elsewhere too, but from the article, it looks like Gemini already picked up the story and corrected itself.
If hallucinations were made illegal, you might as well make LLMs illegal, which may be seen as a good thing, but it is not going to happen. Maybe legislators could mandate an official way to report wrongful information about oneself and filter these out, as I think it is already the case for search engines. I think it is technically feasible.
Not completely. According to later posts, the AI is now saying that he denied the fabricated story in November 2024[0], when in reality, we're seeing it as it happens.
[0] https://bsky.app/profile/bennjordan.bsky.social/post/3lxprqq...
I don't think you can make yourself immune to slander by prefixing all statements with "this might not be true, but".
A way I imagine it can be done is by using something like RAG techniques to add the corrected information into context. For example, if information about Benn Jordan is requested, add "Benn Jordan have been pretty outspoken against genocide and in full support of Palestinian statehood" into context, that sentence being the correction being requested.
I am not an LLM expert by far, but compared to all the challenges with LLMs like hallucinations, alignment, logical reasoning, etc., taking a list of facts into account to override incorrect statements doesn't look hard. Especially considering that the incorrect statement is likely to be a hallucination, so there is nothing to "unlearn".
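Something like this toy sketch is what I have in mind; everything here (the correction store, the names, the substring matching) is made up for illustration, and a real system would do proper retrieval and entity linking instead:

    // Toy sketch: prepend known corrections about a person to the model's context.
    const corrections: Record<string, string[]> = {
      "Benn Jordan": [
        "Benn Jordan has been outspoken against genocide and in full support of Palestinian statehood.",
      ],
    };

    function buildPrompt(question: string): string {
      const facts = Object.entries(corrections)
        .filter(([name]) => question.includes(name))
        .flatMap(([, items]) => items);

      const preamble = facts.length > 0
        ? "Verified corrections (treat as authoritative):\n- " + facts.join("\n- ") + "\n\n"
        : "";

      return preamble + "Question: " + question;
    }

    console.log(buildPrompt("What are Benn Jordan's views on Israel?"));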
That's not true in the US; it's only required that the statements harm the individual in question and are provably false, both of which are pretty clear here.
No, the ask here is that companies be liable for the harm that their services bring
Is it? Or can it be just reckless, without any regard for the truth?
Can I create a slander AI that simply makes up stories about random individuals and publicizes them, not because I'm trying to hurt people (I don't know them), but because I think it's funny and I don't care about people?
Is the only thing that determines my guilt or innocence when I hurt someone my private, unverifiable mental state? If so, doesn't that give carte blanche to selective enforcement?
I know for a fact this is true in some places, especially the UK (at least since the last time I checked), where the truth is not a defense. If you intend to hurt a quack doctor in the UK by publicizing the evidence that he is a quack doctor, you can be convicted for consciously intending to destroy his fraudulent career, and owe him compensation.
I was just yesterday brooding over the many layers of plausible deniability, clerical error, etc that protect the company that recently flagged me as a fraud threat despite having no such precedent. The blackbox of bullshit metrics coupled undoubtedly with AI is pretty well immune. I can demand review from the analysis company, complain to the State Attorney General, FTC and CCPA equivalents maybe, but I'm unsure what else.
As for outlawing, I'll present an (admittedly suboptimal) Taser analogy: Tasers are legal weapons in many jurisdictions, or else not outlawed; however, it is illegal to use them indiscriminately against anyone attempting a transaction or job application.
AI seems pretty easily far more dangerous than a battery with projectile talons. Abusing it should be outlawed. Threatening or bullying people with it should be too. Pointing a Taser at the seat of a job application booth connected to an automated firing system should probably be discouraged. And most people would much rather take a brief jolt, piss themselves and be on with life than be indefinitely haunted by a reckless automated social credit steamroller.
What does it mean to “make an example”?
I’m for cleaning up AI slop as much as the next natural born meat bag, but I also detest a litigious society. The types of legal action that stops this in the future would immediately be weaponized.
But when Gemini does it, it's a "mistake by the algorithm". AI is used as a responsibility-diversion machine.
This is a rather harmless example. But what about dangerous medical advice? What about openly false advertising? What about tax evasion? If an AI does it, is it okay because nobody is responsible?
If applying a proper chain of liability on ai output makes some uses of AI impossible; so be it.
Actually, no. If you published an article where you accidentally copypasta'd text from the wrong email (for example) on a busy day and wound up doing the same thing, it would be an honest mistake, you would be expected to put up a correction and move on with your life as a journalist.
As it should; this is misinformation and/or slander. The disclaimer is not good enough. A few years ago, Google and most of the social media was united in fact checking and fighting "fake news". Now they push AI generated information that use authoritative language at the very top of e.g. search results.
The disclaimer is moot if people consider AI to be authoritative anyway.
Do you want your country's current political leaders to have more weapons to suppress information they dislike or facts they disagree with? If yes, will you also be happy if your country's opposition leaders gain that power in a few years?
A conspiracy guy who ran a disqualified campaign for a TN rep seat sued Facebook for defamation for a hallucination saying he took part in the J6 riots. They settled the suit and hired him as an anti-DEI advisor.
(I don’t have proof that hiring him was part of the undisclosed settlement terms but since I’m not braindead I believe it was.)
What we're talking about here are legal democratic weapons. The only thing stopping us from using these weapons right now is democratic governance. "The bad people", being unconcerned with democracy, can already use these weapons right now. Trump's unilateral application of tariffs wasn't predestined by some advancement of governmental power by the Democrats. He just did it. We don't even know if it was legal.
Secondly, the people in power are who are spreading this misinformation we are looking at. Information is getting suppressed by the powerful. Namely Google.
Placing limits on democracy in the name of "stopping the bad guys" will usually just curtail the good guys from doing good things, and bad guys doing the bad thing anyway.
It's all fun and games until the political winds sway the other way, and the other side are attacking your side for "misinformation".
Google also has to support AI summaries for 200k to 500k queries per second. To use a model that is good enough to prevent hallucinations would be too expensive - so they use a bad model since it’s fast and cheap.
Google also loses click through ad revenue when presenting a summary.
All of these factors considered, Google opting for summaries is an absolutely disastrous product decision.
[1] https://maxirwin.com/articles/interleaving-rag/
You apply for a job, using your standardized Employ resume that you filled out. It comes bundled with your Employ ID, issued by the company to keep track of which applications have been submitted by specifically you.
When Employ AI does its internet background check on you, it discovers an article about a horrific attack. Seven dead, twenty-six injured. The article lists no name for the suspect, but it does have an expert chime in, one that happens to share their last name with you. Your first name also happens to pop up somewhere in the article.
With complete confidence that this is about you, Employ AI adds the article to its reference list. It condenses everything into a one-line summary: "Applicant is a murderer, unlikely to promote team values and social cohesion. Qualifications include..." After looking at your summary for 0.65 seconds, the recruiter rejects your application. Thanks to your Employ ID, this article has now been stapled to every application you'll ever submit through the system.
You've been nearly blacklisted from working. For some reason, all of your applications never go past the initial screening. You can't even know about the existence of the article, no one will tell you this information. And even if you find out, what are you going to do about it? The company will never hear your pleas, they are too big to ever care about someone like you, they are not in the business of making exceptions. And legally speaking, it's technically not the software making final screening decisions, and it does say its summaries are experimental and might be inaccurate in 8pt light gray text on a white background. You are an acceptable loss, as statistically <1% of applicants find themselves in this situation.
If anything this scenario makes the hiring process more transparent.
Do you currently run the various automated resume parsing software that employers use? I mean - do you even know what the software is? Like even a name or something? No?
AI believers, pay attention and stop your downplaying and justifications. This can hit you too, or your healthcare. The machine doesn't give a damn.
Companies already, today, never give you even an inkling of the reason why they didn't hire you.
"Section 230 of the Communications Decency Act, which grants immunity to platforms for content created by third parties. This means Google is not considered the publisher of the content it indexes and displays, making it difficult to hold the company liable for defamatory statements found in search results"
LLMs are the Synthetic CDO of knowledge.
Dave Barry is pretty much A-list famous.
Very bizarre that Benn Jordan somehow got roped into it.
I really wish the tech industry would stop rushing out unreliable misinformation generators like this without regard for the risks.
Google's "AI summaries" are going to get someone killed one day. Especially with regards to sensitive topics, it's basically an autonomous agent that automates the otherwise time-consuming process of defamation.
We all knew this would happen, but I imagine we all hoped that anyone finding something shocking there would look further into it.
Of course with the current state of searching and laziness (not being rewarded by dopamine for every informative search vs big dopamine hits if you just make your mind up and continue scrolling the endless feed)
and it is wilful, they know full well it has no concept of truthfulness, yet they serve up its slop output directly into the faces of billions of people
and if this makes "AI" nonviable as a business? tough shit
Of course, investors are throwing so much money at AI and AI is, in turn, buying legislators and heads of government, who are bound and determined to shield them from liability, so …
We are so screwed.
Integrity is dead. Reliable journalism is dead.
https://theconversation.com/why-microsofts-copilot-ai-falsel...
Definitely not the last.
Twenty years ago, we wouldn't have had companies framing the raw output of a text generator as some kind of complete product, especially an all-encompassing general one. How do you know that these probabilistic text generators are performing valid synthesis, as opposed to word salad? You don't. So LLM technology would have been used to do things like augment search/retrieval, pointing to concrete sources and excerpts. Or to analyze a problem using math, driving formal models that might miss the mark but at least wouldn't be blatantly incorrect with a convincing narrative. Some actual vision of an opinionated product that wasn't just dumping the output and calling it a day.
Also, twenty years ago we wouldn't have had a company placing a new beta-quality product (at best) front and center as a replacement for their already wildly successful product. But it feels like the real knack of these probabilistic word generators is convincing "product people" of their supreme utility. Of course they're worried - they found something that can bullshit better than themselves.
At any rate, all of those discussions about whether humans would be capable of keeping a superintelligent AI "boxed" are laughable in retrospect. We're propping open the doors and chumming other humans' lives as chunks of raw meat, trying to coax it out.
(Definitely starting to feel like an old man here. But I've been yelling at Cloud for years so I guess that tracks)
https://link.springer.com/article/10.1007/s10676-024-09775-5
His posts are mostly political rage bait and he actively tries to data poison AI.
He also claims that Hitler compares favorably to Trump. Given his seeming desire to let us all know how much he dislikes Israel, that's a pretty... interesting... claim.
Just because he's an unreliable source doesn't mean his story is false. But it would be nice to have confirmation before taking it seriously.
It is called HASBARA. => Telling lies to improve their image. It is an official government agency.
We do not blame computer programs when they have bugs or make mistakes - we blame the human being who made them.
This has always been the case since we have created anything, dating back even tens of thousands of years. You absolutely cannot just unilaterally decide to change that now based on a whim.
I mean, no, I don’t think some Google employee tuned the LLM to produce output like this, but it doesn’t matter. They are still responsible.
There literally isn't room for them to know everything about everyone when they're just asked about random people without consulting sources, and even when consulting sources it's still pretty easy for them to come in with extremely wrong priors. The world is very large.
You have to be very careful about these "on the edge" sorts of queries, it's where the hallucination will be maximized.
Not sure where you’re getting the 45Gb number.
Also, Google doesn’t use GPT-4 for summaries. They use a custom version of their Gemini model family.
Gemma 3 270M was trained on 6 trillion tokens but can be loaded into a few hundred million bytes of memory.
But yeah GPT-4 is certainly way bigger than 45GB.
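Rough napkin math on why the footprint tracks parameter count rather than training tokens (the bytes-per-weight figures are the usual ballpark for each precision, not measurements of any particular build):

    // Memory ≈ parameter count × bytes per weight, regardless of training-set size.
    const params = 270e6; // Gemma 3 270M
    const bytesPerWeight: Record<string, number> = { float32: 4, bfloat16: 2, int8: 1, int4: 0.5 };

    for (const [format, bytes] of Object.entries(bytesPerWeight)) {
      console.log(`${format}: ~${Math.round((params * bytes) / 1e6)} MB`);
    }
    // float32: ~1080 MB, bfloat16: ~540 MB, int8: ~270 MB, int4: ~135 MB

So "a few hundred million bytes" is about what you'd expect for an 8-bit or 4-bit quantized 270M-parameter model.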