Ask HN: Good resources for DIY-ish animatronic kits for Halloween?
4 points by xrd 1d ago 0 comments
Why the Technological Singularity May Be a "Big Nothing"
7 points by starchild3001 1d ago 8 comments
Google's new AI mode is good, actually
125 points by xnx 9/7/2025, 2:43:07 PM 67 comments (simonwillison.net)
Also, I can't access Google AI Mode because I'm in the EU, but looking at the video on YT, it looks like Perplexity, but googlified. I haven't seen any other tool that comes close to Perplexity yet. I have their app installed on all my devices and it's part of my daily life, it's so good! Especially with their Pro plan (I got 12 months for free).
As for AI tools, I'm in the same boat: I can't access all of them where I live, but I've been impressed with the ones I can use. Perplexity has definitely been the standout for me too, both for quick factual lookups and for deeper dives when I don't want to wade through endless search results. Having it synced across devices is such a game-changer; I'll use it on my phone while commuting and then pick up the same threads on my laptop later. Honestly, I'd rather pay for something that saves me time every single day than stick with the old ways of searching.
It also seems to excel at things ChatGPT 5 Thinking isn't good at. Simple things like "Here's a screenshot of text, please transcribe it" - ChatGPT 5 Thinking will spend 2 minutes and still get the results wrong, while Gemini Pro will spend 20-30 seconds and transcribe everything perfectly.
Obviously that's just one use case, but as someone who previously used ChatGPT exclusively, I'm increasingly impressed by Gemini the more I use it, mainly due to the much faster thinking times that seem to provide equal or better results than GPT-5 Thinking.
I took pictures of a book, asked Gemini to transcribe them, then to translate them, and I'm now in the process of having it reproduce the whole book in LaTeX (lots of figures). Not sure exactly what I'm doing, but I've been wondering: should I reproduce the publishing house logo, or invent my own? Damn, this is fun.
What makes it worse is that even if you are intentional and willing to pay, it doesn't help everywhere. Gemini as a voice assistant defaults to Flash and there is no option to change it, even though the speed of the faster model hardly matters there, while accuracy does. Same with AI suggestions everywhere, and other such "ambient" features.
The flip side of being intentional about always using SOTA models is that you notice when you're not getting them.
Edit: https://news.ycombinator.com/item?id=45143392
I'm sure this method _did_ come under discussion in the lawsuit & settlement, but as you pointed out the settlement itself was only about pirated works.
I recall a tweet by Altman, leaking the launch of GPT-5, that praised the new model's answer to a prompt about thought-provoking TV shows about AI. The X thread that followed was all about the form ("em-dashes are still there!"), and nearly nobody bothered to check that neither of the recommended shows was about AI. They weren't, or at least it's very debatable that they belong to the genre.
It's probably the most popular AI on earth by daily queries, and likewise probably only an ~8B-parameter-class model, which means a whole bunch of people equate Google AI with AI Overviews.
I've not seen many hallucinations, and fact-checking is fairly straightforward with the onward links. It's not like I can take any linked content at face value anyway; I'd still want to fact-check when it makes sense even if it wasn't AI-written.
Small-time blogs were dead before AI
If people really wanted the truth and facts, we would not have misinformation spread this widely via social media and other places.
Is that a good thing? The reality is that most humans are becoming more and more intellectually lazy, and as a result their cognitive functions are in decline. So if something looks right at face value or supports an internal bias, they take it and run with it.
I do probably 40% of my searches with AI Mode now. It can't possibly be profitable (and maybe that's why it's not more discoverable), but the results are awesome.
Edit: I also tried to show my aging parents how to use it, and it was inexplicably not available on their devices. They use old (10-ish-year-old) iOS devices, which are apparently incompatible even though it's a web interface.
Google answers more concisely, faster, and more confidently, but I'm not convinced the quality of the output is better. For example, Google pulled in info from AWS and Oracle Cloud when I asked a GCP-specific question, while Perplexity sourced only from the GCP docs.
Which is an interesting outcome, since I'd expect Google to excel at the search aspect.
You still do?
I've gotten it to identify:
- the comic from a random page of an obscure Russian comic
- obscure French comedy from a random clip
It was extra impressive because even reverse search from Lens didn't immediately identify them.
- it works when the info is either relatively well known or quite new
- non-AI mode has become dumber; the old trick of "grepping" the internet with +, -, and "" is gone
At least some of the Google search operators seem to still work, although Google themselves aren't very forthcoming about documenting these.
https://ahrefs.com/blog/google-advanced-search-operators/
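For what it's worth, a few operators that still seem to work for me (behavior changes over time, so treat these as illustrative examples rather than guarantees):

    site:simonwillison.net gemini        restrict results to one domain
    "ai mode" -overviews                 exact phrase, exclude a term
    intitle:perplexity filetype:pdf      match the page title, filter by file type
    gemini before:2025-01-01             rough date filter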
- When I know so little about a subject that the concepts are all vague and I'm intellectually grasping in the dark. LLMs are good at taking my imprecise wording and orienting me towards the paths/gradients others have taken.
VS
- I know this subject well enough and instead of fumbling around I want to be able to run grep over a massive amount of open text data.
While the former mode is useful, the act of training without asking has led to the re-popularization of walled gardens. Early Google felt like being able to grep every library book in existence, and a three-paragraph summary in response to very poorly worded questions is a terrible trade.
I know Simon is smart and included his search terms, warts and all, to be open, so I don't want to shame him over this. But c'mon, just type out "bought" ("Anthropic but lots of physical books..."). Complete anecdote, but I've noticed my LLM-reliant friends have become way worse at texting, and it feels like it's worth taking 5 seconds to structure your thoughts, simply for the practice.
OpenAI searches are even better, but GPT-5 is extremely slow with thinking. Without thinking it's roughly equivalent.
I'd guess it might be more of a structured/agentic approach: maybe it has learnt how to map "search" strings to relevant data-retrieval queries, then combines/summarizes the returned results.
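If that guess is roughly right, a minimal sketch of such a loop might look like the snippet below. Purely illustrative: answer, llm, and web_search are hypothetical placeholders, not anyone's actual API.

    # Hypothetical agentic search pipeline: expand the question into a few
    # search queries, retrieve results, then summarize from the snippets.
    def answer(question: str, llm, web_search) -> str:
        # Ask the model to rewrite the question as concrete search queries.
        queries = llm(f"Write 3 short web search queries for: {question}").splitlines()
        docs = []
        for q in queries:
            # web_search is a stand-in for whatever retrieval backend is used.
            docs.extend(web_search(q, top_k=5))
        # Combine the retrieved snippets and summarize with the same model.
        context = "\n\n".join(d["snippet"] for d in docs)
        return llm(f"Using only these sources, answer: {question}\n\n{context}")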