SynthID – A tool to watermark and identify content generated through AI
87 points by jonbaer 8/30/2025, 3:29:01 AM | 74 comments | deepmind.google ↗
> SynthID adjusts these probability scores to generate a watermark. It's not noticeable to the human eye, and doesn’t affect the quality of the output.
I think they need to be clearer about the constraints involved here. If I ask "What is the capital of France? Just the answer, no extra information." then there's no room to vary the probability without harming the quality of the output. So clearly there is a lower bound below which this becomes ineffective. And presumably the longer the text, the more resilient it is to alterations. So what are the constraints?
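This intuition about length can be made concrete. A toy sketch of "green-list" logit-bias watermarking (the scheme from the Kirchenbauer et al. line of work; SynthID's actual tournament-sampling algorithm differs, and the vocabulary, bias, and seeding here are all made up for illustration): each token pseudorandomly partitions the vocabulary, generation boosts the "green" half, and detection counts how often the next token lands in it. A one-token answer gives one coin flip of evidence; hundreds of tokens give a strong statistical signal.

```python
import hashlib
import random

VOCAB = [f"tok{i}" for i in range(1000)]  # toy vocabulary, purely illustrative

def green_set(prev_token: str, frac: float = 0.5) -> set:
    """Pseudorandomly partition the vocabulary, keyed on the previous token.
    A real scheme would mix in a secret watermarking key."""
    seed = int.from_bytes(hashlib.sha256(prev_token.encode()).digest()[:8], "big")
    rng = random.Random(seed)
    shuffled = VOCAB[:]
    rng.shuffle(shuffled)
    return set(shuffled[: int(len(shuffled) * frac)])

def generate(n: int, bias: float = 4.0) -> list:
    """Sample a sequence from a fake 'model' with uniform logits,
    upweighting green-list tokens by `bias` (bias=1.0 => no watermark)."""
    rng = random.Random(42)
    out = ["tok0"]
    for _ in range(n):
        greens = green_set(out[-1])
        weights = [bias if t in greens else 1.0 for t in VOCAB]
        out.append(rng.choices(VOCAB, weights=weights, k=1)[0])
    return out

def green_fraction(tokens: list) -> float:
    """Detector: fraction of tokens that fall in the previous token's green set.
    ~0.5 for unwatermarked text; well above 0.5 when the bias was applied."""
    hits = sum(1 for prev, cur in zip(tokens, tokens[1:]) if cur in green_set(prev))
    return hits / (len(tokens) - 1)
```

The detector is purely statistical, which is why short or fully constrained outputs ("Paris") carry essentially no watermark evidence, while long free-form text does.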
I also think that this is self-interest dressed up as altruism. There’s always going to be generative AI that doesn’t include watermarks, so a watermarking scheme cannot tell you if something is genuine. It is, however, useful for determining that something came from a specific provider, which could be valuable to Google in all sorts of ways.
Printer tracking dots[1] are one prior solution like this: annoying, largely unknown, workarounds exist, and still surprisingly effective.
[1]: https://en.m.wikipedia.org/wiki/Printer_tracking_dots
> Both journalists and security experts have suggested that The Intercept's handling of the leaks by whistleblower Reality Winner, which included publishing secret NSA documents unredacted and including the printer tracking dots, was used to identify Winner as the leaker, leading to her arrest in 2017 and conviction.
2. Adding watermarks may reduce the ability of an LLM, which is why I don’t think they will be widely adopted.
3. Consider this simple task: ask an LLM to repeat exactly what you said. Is the resulting text authored by you, or by the AI?
I think this technology is going to get eliminated from the marketplace quickly, because people aren't willing to use AI for many common tasks if the output is watermarked this way. It's ultimately going to cost Google market share.
This technology has a basic dilemma: widely publicizing its ability and existence will cause your AI to stop being used in some applications.
I think it's weirder that they're clamoring to give people tools to detect AI while simultaneously working to present AI-generated content as perfectly normal, no different than if the user had typed it in themselves.
To the extent watermarking technology builds trust and confidence in a product, this is a factor that moves against your prediction.
Talk is cheap. People sometimes make predictions just as easily as they generate words.
https://www.nature.com/articles/s41586-024-08025-4
There's a lot of room for contributions here, and I think the "fingerprinting layer" is an under-valued part of the LLM stack, not being explored by enough entrants.
That sounds like a nightmare to me.
We once considered text to be generated exclusively by humans, but this assumption must be tossed out now.
I usually reject arguments based on an assumption of some status quo that somehow just continues.
Why? I’ll give two responses, which are similar but use different language.
1. There is a fallacy where people compare a future state to the present state, but this is incorrect. One has to compare two future states, because you don’t get to go back in time.
2. The “status quo” isn’t necessarily a stable equilibrium. The state of things now is not necessarily special nor guaranteed.
I’m now of the inclination to ask for a supporting model (not just one rationale) for any prediction, even ones that seem like common sense. Common sense can be a major blind spot.
Very fair point.
And no, it’s less about the status quo and more about AI being the default. There are just too many reasons why this proposal, on its face, seems problematic to me. The following are some questions to highlight just a few of them:
- How exactly would “human creators [applying] their own digital signatures to the original pieces they created” work for creators who have already passed away?
- How fair exactly would it be to impose such a requirement when large portions of the world’s creators (especially in underdeveloped areas) would likely not be able to access and use the necessary software?
- How exactly do anonymous and pseudonymous creators survive such a requirement?
Have you looked into kinds of mitigations that cryptography offers? I’m not an expert, but I would expect there are ways to balance some degree of anonymity with some degree of human identity verification.
Perhaps there are some experts out there who can comment?
I like the digital signature approach in general, and have argued for it before, but this is the weak link. For photos and video, this might be OK if there's a way to reliably distinguish "photos of real things" from "photos of AI images"; for plain text, you basically need a keystroke-authenticating keyboard on a computer with both internet access and copy and paste functionality securely disabled -- and then you still need an authenticating camera on the user the whole time to make sure they aren't just asking Gemini on their phone and typing its answer in.
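The signature mechanics themselves are the easy part; the weak link named above is binding the signature to *how* the content was produced. A minimal, hypothetical sketch (using symmetric HMAC from the Python stdlib as a stand-in for the asymmetric signatures real provenance schemes like C2PA actually use, where verifiers hold only a public key):

```python
import hmac
import hashlib

def sign(content: bytes, key: bytes) -> str:
    """Produce a tag binding the key holder to these exact bytes."""
    return hmac.new(key, content, hashlib.sha256).hexdigest()

def verify(content: bytes, key: bytes, tag: str) -> bool:
    """Check the tag; any change to the content invalidates it."""
    return hmac.compare_digest(sign(content, key), tag)
```

Note what this proves and what it doesn't: a valid tag shows the key holder vouched for those exact bytes, but says nothing about whether a human composed them or transcribed them from Gemini, which is exactly the gap the keystroke-authentication point identifies.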
The problem is becoming urgent: more and more so-called “podcasts” are entirely fake, generated by NotebookLM and pushed to every major platform purely to farm backlinks and run blackhat SEO campaigns.
Beyond SynthID or similar watermarking standards, we also need models trained specifically [0] to detect AI-generated audio. Otherwise, the damage compounds - people might waste 30 minutes listening to a meaningless AI-generated podcast, or worse, absorb and believe misleading or outright harmful information.
[0] 15,000+ ai generated fake podcasts https://www.kaggle.com/datasets/listennotes/ai-generated-fak...
Also, if everything in the future has some touch of AI inside, for example cameras using AI to slightly improve the perceived picture quality, then "made with AI" won't be a categorization that anybody lifts an eyebrow about.
If the problem is "kids are using AI to cheat on their schoolwork and it's bad PR / politicians want us to do something" then competitors' models aren't your problem.
On the other hand, if the problem is "social media is flooded with undetectable, super-realistic bots pushing zany, divisive political opinions, we need to save the free world from our own creation" then yes, your competitors' models very much are part of the problem too.
Additionally, I don’t think the watermark has to be deterministic.
Almost all the big hosted AI providers are publicly working on watermarking, at least for media (text is more of a mixed bag). Ultimately, it's probably a regulatory play: the big providers expect that legitimate concerns, combined with their own active fearmongering and their demonstrations of watermarking, will result in mandates for commercial AI generation services to include watermarking. This may even be part of a regulatory play to restrict availability and non-research use of open models.
Sure, in some cases a model might do some astounding things that always shine through, but I guess the jury is still out on these questions.
There is a kind of arms race that has existed for a while for non-watermarked content, except that the detection tools are pretty much Magic 8-ball level of reliability, so there's not a lot of effort on the counter-detection side.
And what if I scan the image, or take a picture of the image on a display?
It is easy to alter by just saving to a different format or doing basic cropping.
I would love to see how SynthID addresses this.
https://help.openai.com/en/articles/8912793-c2pa-in-chatgpt-...
AI watermarking is adversarial: anyone who generates a watermarked output either doesn't care, or wants the watermark removed.
C2PA is cooperative: publishers want the signatures intact, so that the audience has trust in the publisher.
By "adversarial" and "cooperative", I mean in relation to the primary content distributor. There's an adversarial aspect to C2PA, too: bad actors want leaked keys so they can produce fake video and images with metadata attesting that they're real.
A lot of people have a large incentive to disrupt the AI watermark. Leaked C2PA keys will be a problem, but probably a minor one. C2PA is merely an additional assurance, beyond the reputation and representation of the publishing entity, of the origin of a piece of media.
I'd love to see the data behind this claim, especially on the audio side.