The GitHub website is slow on Safari
233 points by talboren 8/27/2025, 9:43:43 AM | 187 comments (github.com)
It's a product of many cooks and their brilliant ideas and KPIs, a social network for devs and code being the most "brilliant" of them all. For day-to-day dev operations it's so mediocre that even GitLab looks like the gold standard compared to GitHub.
And no, the problem is not "Rails" or [insert any other tech BS to deflect from the real problems].
The problem is they abandoned Rails for React. The old SSR GitHub experience was very good: you could review massive PRs on any machine before they made the move.
Their "solution" was to enable SSR for us ranters' accounts.
The fact that they have this ability / awareness and haven't completely reverted by now is shocking to me.
If they were forced to use slow machines, they would not be able to put out crap like that.
Which, it seems, was a result of the M$ acquisition: https://muan.co/posts/javascript
network.http.referer.XOriginPolicy = 1
(A Firefox about:config pref: 0 always sends the Referer header cross-origin, 1 sends it only when base domains match, 2 only when full host names match.)
I’m sure you could make something work better as a SPA, but nobody does.
GitHub's code view page has been unreasonably slow for the last several years, ever since they migrated away from Rails for no apparent reason.
The available data confirms that SPAs tend to perform worse than classic SSR.
Too bad Phabricator is maintenance-only now https://en.m.wikipedia.org/wiki/Phabricator
https://we.phorge.it/
I assume this is fallout from dealing with LLM content scrapers.
https://we.phorge.it/phame/post/view/8/anonymous_cloning_dis... https://we.phorge.it/phame/post/view/9/anonymous_cloning_has...
Perhaps it depends on what software one is using.
For example, commandline search and tarball/zipball retrieval from the website, e.g., github.com, raw.githubusercontent.com and codeload.github.com, are not slow for me, certainly not any slower than Gitlab
I do not use a browser nor do I use the git software
At the very least, I wish they set it to auto.
Never had any issues with it.
The page that the person on the issue had loading for 10s takes almost 2s here.
GitLab is anything but light and tends to be slow by default, but it's surprisingly fast with a good server (nothing crazy, but big) and caching.
Gitea is an example I like because it stores each repository as a bare repository, the same as if I did git clone --bare. I bring it up because when I stopped running Gitea, I could easily go into the data, back up all the repositories, and easily reuse them somewhere else.
For my sins I occasionally create large PRs (> 1,000 files) in GitHub, and teammates (who mostly all use Chrome) will sometimes say "I'll approve once it loads for me..."
I've actually been trying to figure out how to get my React application (unreleased) to perform as smoothly in Safari as it does in Firefox/Chrome, and it seems like the problem is all the damn DOM elements. This sucks. Virtualizing viewports adds loads of complexity and breaks some built-in browser features, so I generally prefer not to do it. But, at least in my case, Safari seems to struggle more than Chrome and Firefox with certain layout operations on a shitload of elements.
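For what it's worth, the core of virtualization is small; it's everything around it (focus, find-in-page, anchor links) that adds the complexity. A minimal sketch in TypeScript, assuming fixed-height rows (all names here are hypothetical, not from any particular library):

    // Compute which rows to render from the scroll position.
    // Real libraries (react-window, TanStack Virtual) also handle
    // variable heights, scroll anchoring, and accessibility.
    function visibleRange(
      scrollTop: number,      // container.scrollTop
      viewportHeight: number, // container.clientHeight
      rowHeight: number,      // fixed height per row, in px
      totalRows: number,
      overscan = 5            // extra rows rendered above/below
    ): { start: number; end: number } {
      const start = Math.max(0, Math.floor(scrollTop / rowHeight) - overscan);
      const end = Math.min(
        totalRows,
        Math.ceil((scrollTop + viewportHeight) / rowHeight) + overscan
      );
      return { start, end }; // render rows [start, end) inside a
                             // spacer of height totalRows * rowHeight
    }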
By all means. It sometimes feels like React is more the symptom than the actual issue, though.
Personally I just like having less code; it generally makes for fewer footguns. But that's an incredibly hard sell (and of course not the entire story).
But CSS has bitten me on heavy pages (causing a few seconds of lag that even devtools debugging/logging didn't point towards). We know wildcard selectors can impact performance, but in my case there were many open-ended selectors like `:not(.what) .ever`, where the `:not()` not being attached to anything made it act like a wildcard with conditions. Using `:has()` will do the same with additional overhead. Safari was the worst at handling large pages and these types of selectors, and I noticed more sluggishness 2-3 years ago.
Normally you should be able to debug selector-matching performance (and, in general, see how much style computation costs you), so it's a bit weird to have phantom multi-second delays.
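A rough console experiment for this (my own workflow assumption, nothing GitHub-specific): force a full style recalculation and time it, before and after removing the suspect rules. Chrome's Performance panel also has an "Enable CSS selector stats" option that gives real per-selector numbers.

    // Toggling a class on the root invalidates style for the whole tree;
    // reading offsetHeight forces the pending style/layout work to run.
    function timeStyleRecalc(label: string): void {
      document.documentElement.classList.add("force-recalc");
      const t0 = performance.now();
      void document.documentElement.offsetHeight;
      console.log(`${label}: ${(performance.now() - t0).toFixed(1)} ms`);
      document.documentElement.classList.remove("force-recalc");
    }

    timeStyleRecalc("with :not(.what) .ever rules");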
It's just easier to blame the tools (or companies!) you already hate.
"Rename 'CustomerEmailAddress' to 'CustomerEmail'"
"Upgrade 3rd party API from v3 to v4"
I genuinely don't get this notion of a "max # of files in a PR". It all comes off to me as post hoc justification of really shitty technology decisions at GitHub.
The usual response is something like: "if you're correct, wouldn't that mean there are hundreds of cases where this needs to be fixed to resolve this bug?" The answer, obviously, being yes. Incoming 100+ file PR to resolve this issue. I have no other ideas for how someone is supposed to resolve an issue in this scenario.
A computer will be able to tell that the 497th change has a misspelled `CusomerEmail`, or that change 829 is a regexp failure that trimmed the boolean "CustomerEmailAddressed" to "CustomerEmailed", with 100% reliability; humans, not so much.
Or that you'd have to Ctrl+F "CustomerEmail" and check whether you get 1,000 matches, matching the number of changed files, or only 999 due to some typo.
Or using the web interface to filter by file type to batch your reviews.
Or...
Just that none of those cases comes anywhere close to fitting within our memory/attention capacity.
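To make that concrete, here's a sketch of the kind of check a machine does trivially (Node + TypeScript; the identifier pair, base branch, and regex are hypothetical):

    import { readFileSync } from "node:fs";
    import { execSync } from "node:child_process";

    const OLD = "CustomerEmailAddress";
    const NEW = "CustomerEmail";

    // Every file touched by the PR (assumes a `main` base branch).
    const files = execSync("git diff --name-only main...HEAD")
      .toString().trim().split("\n");

    for (const file of files) {
      const text = readFileSync(file, "utf8");
      // A leftover occurrence means that change missed the rename.
      if (text.includes(OLD)) console.error(`${file}: still contains ${OLD}`);
      // Near-miss typos like "CusomerEmail" contain neither OLD nor NEW exactly.
      for (const m of text.matchAll(/\bCus\w*Email\w*/g)) {
        if (!m[0].startsWith(NEW)) console.error(`${file}: suspicious ${m[0]}`);
      }
    }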
I work in a large C++ codebase and a rename like that will actually just crash my vscode instance straight-up.
(There are good automated tools that make it straightforward to script up a repository-wide mutation like this however. But they still generate PRs that require human review; in the case of the one I used, it'd break the PR up into tranches of 50-ish files per tranche and then hunt down individuals with authority to review the root directory of the tranche and assign it to them. Quite useful!)
Sure, 1,000+ changes kill the soul; we're not good at that, but sometimes there's just no other decent choice.
I would rather just see the steps you ran to generate the diff and review that instead.
A very simple example: migrating from JavaEE to JakartaEE. Every single Java source file has to have the imports changed from "javax." to "jakarta.", which can easily be thousands of files. It's also easy to review (and any file which missed that change will fail when compiling on the CI).
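For illustration, a sketch of what generating such a diff can look like (Node + TypeScript; the source root is hypothetical, and a real migration only rewrites the EE packages such as javax.servlet and javax.persistence, not all of javax.*):

    import { readdirSync, readFileSync, writeFileSync } from "node:fs";
    import { join } from "node:path";

    const root = "src/main/java"; // hypothetical source root

    const files = readdirSync(root, { recursive: true })
      .map(String)
      .filter((f) => f.endsWith(".java"));

    for (const rel of files) {
      const path = join(root, rel);
      const src = readFileSync(path, "utf8");
      // Only touch import statements, not arbitrary strings or comments.
      const out = src.replace(/^import javax\./gm, "import jakarta.");
      if (out !== src) writeFileSync(path, out);
    }

Reviewing a script like that, plus CI compiling everything, is far more tractable than eyeballing thousands of mechanical hunks.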
But there is also the Safari Technology Preview, which installs as a separate app, but is also a bit more unstable. Similar to Chrome Canary.
It's hard to know which member of the duopoly is more guilty for breaking GitHub for me, but I find that blaming both often guarantees success.
I could like, buy a new computer and stuff. But you know, the whole Turing complete thing feels like a lie in the age of planned obsolescence. So web standards are too.
In case you're one of today's lucky 10,000, OpenCore Legacy Patcher supports Macs going back as far as 2007: https://github.com/dortania/OpenCore-Legacy-Patcher
I know some people feel like Apple is aggressive in this respect, but that's an 8 year old version of a browser. That's like taking off all of the locks on your house, leaving the doors and windows open all while expecting your house to never have uninvited guests.
Depending on where you live (or what websites you visit) it's not unreasonable.
I'd put it on the end user for not updating software on 15-year-old hardware while still expecting the outside world to interact with it cleanly.
That's probably true.
> 15 y/o
It's a matter of expectations; many laptops that old still work decently enough with a refreshed battery. Funnily enough, Win10 was released 10 years ago, and one can still get support for it for at least another 3 years, until 2028, even on the consumer license.
Should they be locking Safari to the OS? Definitely not. But users can just go download another browser if they are actually concerned.
So GitHub is usable, but there are a number of UI layout issues, and searching within a file is sometimes a mess (e.g., highlighting the wrong text, rendering text incorrectly, etc.). Maybe that's true for all browsers; you're better off viewing a file as text in raw mode.
Planned obsolescence is some of it, some of it is abstractions making it easier for more people to make software (at the cost of using significantly more compute) and Moore’s law being able to support those abstraction layers. Just imagine if every piece of software had to be written in C, the world would look a whole lot different.
I also think we’ve gone a bit too far into abstraction land, but hey, that’s where we are and it’s unlikely we are going back.
Turing completeness is almost an unrelated concept in all of this if you ask me, and if anything it's that very completeness that has driven higher and higher memory and compute requirements.
- Project managers putting constant pressure on developers to deliver as fast as possible. It doesn't even matter if velocity will be lost in the future, or if the company might lose customers, or even if it breaks the law.
- Developers pushing back on things that can backfire, burning political capital and causing constant burnout. And when things DO backfire, the developer is blamed for letting it happen and not having pushed back harder in the first place.
- Developers who learned that the only way to win is by not giving a single fuck, and just trucking on through the tasks without much thought.
This might sound highly cynical, but unfortunately this is what it has become.
Developers are way too isolated from the end result, and accountability is non-existent for PMs who isolate devs from the result, because "isolating developers" is seen as their only job.
EDIT: This is a cultural problem that can't be solved by individual contributors or by middle management without raising hell and putting a target on their backs. Only cultural change enforced by C-Levels is able to change this, but this is *not* in the interest of most CEOs or CTOs.
Don't listen to the opinions of the developers writing this code. Listen to the opinions of the people making these tech stack decisions.
Everything else is a distant second, which is why you get shitty performance and developers who cannot measure things. It also explains why, when you ask the developers about any of this, you get bizarre cognitive complexity for answers. The developers, in most cases, know what they need to do to be hired and cannot work outside those lanes, yet simultaneously are aware of various limitations of what they release. They know the result is slow, likely has accessibility problems, scales poorly, and so on, but their primary concern is retaining employment.
Today's version is: "You will get fired unless you use React."
So every site now uses React, no matter if the end result is a dog-slow GitHub.
Bad developers ask "what is everybody else using?"
Good developers ask "what is the best and simplest (KISS) tool for this?"
If you put a lot of momentum behind a product with that mentality, you get features piled on tech debt. No one gets enthusiastic about paying it down, because it was done by some prior team you have no understanding of, and it gets in the way of what management wants, which is more features so they can get bonuses.
Speaking up about it gets you shouted down and thrown on a performance improvement plan because you aren't aligned with your capitalist masters.
If a developer has to put up a fight in order to push back against the irresponsibility of a non-technical person, they by definition don't have ownership.
Is it your theory that working on large projects was better when you had communist masters? That seems inconsistent with everything we know, e.g. quotas enforced by mass murder.
My guess is that it's more about organizations (your first paragraph) and less about capitalism (your last paragraph).
For instance, the GP could be a proponent of self-management, and the statement would be coherent (an indictment of leaders within capitalism) without supposing anything about communism.
The short answer is: no, they don't. Google Cloud relied upon some Googlers happening to be Firefox users. We definitely didn't have a "machine farm" of computers running relevant OS and browser versions to test the UI against (that exists in Google for some teams and some projects, but it's not an "every project must have one" kind of resource). When a major performance regression was introduced (in Firefox only) in a UI my team was responsible for once, we had a ticket filed that was about as low-priority as you can file a ticket. The solution? Mozilla patched their rendering engine two minor versions later and the problem went away.
I put more than zero effort into fixing it, but tl;dr I had to chase the problem all the way to debugging the browser rendering engine itself via a build-from-source, and since nobody had set one of those up for the team and it was the first time I was doing it myself, I didn't get very far; Google's own in-house security got in the way of installing the relevant components to make it happen, I had to understand how to build Firefox from source in the first place, my personal machine was slow for the task (most of Google's builds are farm-based; compilation happens on servers and is cached, not on local machines).
I simply ran out of time; Mozilla fixed the issue before I could. And, absolutely, I don't expect it would have been promotion-notable that I'd pursued the issue (especially since the solution of "procrastinating until the other company fixes it" would have cost the company 0 eng-hours).
I can't speak for GitHub / Microsoft, but Google nominally supports the N (I think N=2) most recent browser versions for Safari, Edge, Chrome, Firefox, but "supports" can, indeed, mean "if Firefox pushes a change that breaks our UI... Well, you've got three other browsers you could use instead. At least." And of course, yes, issues with Chrome performance end up high priority because they interfere with the average in-house developer experience.
Unrealistic timelines, implementing what should be backend logic in the frontend: there are a bunch of ways SPAs tend to be a trap. Was React a bad idea? Can anyone point to a single well-made React app?
I don’t think the culprit apps would have substantially better UX if they were rendered on the server, because these issues tend to be a consequence of devs being pressured to rapidly release new features without regard to quality.
As an aside, I was an employee around then and I vividly remember that the next half there was a topline goal to improve web speed. Hmmmm, I wonder what could have happened?
On react, it's funny that sites where the frontend part is really crucial tend to move away from generic frameworks and do really custom stuff to optimize. I'm thinking about Notion, or Google Sheets, or Figma, where the web interface is everything and pretty early on they just bypass the frontend stacks generally used by the industry.
https://t3.chat/
The problem isn't React. The problem is KPIs and unrealistic timelines. It is the same as ever. Not a fault of React at all.
What about Slack, the messenger?
Umm, Discord? SoundCloud? Trello? Bandcamp? Spotify?
If I keep going there are actually hundreds and thousands of well-made react apps.
As you point out, it's wildly successful and is the backbone of many groups' internal communication. Many companies would just stop working without Slack; that's a testament to the current team's efforts, but something that critical would also merit better performance.
I'd make the comparison with Figma, which went the extra mile to bring a level of smoothness that just wouldn't be there otherwise.
Are you under the impression that the placeholder skeletons are there and slow because of React? How would a UI written in C++ get the data quicker from the back end to replace the skeleton with?
The main problem is that it tries to do away with a view model layer, so you can fetch the data and render it directly in the components, but that makes managing multiple components from a high-level perspective all but impossible. Instead of one view model, you end up with 50 React-esque utilities to achieve the same result.
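To illustrate what "one view model" buys you, a minimal framework-agnostic sketch (all names hypothetical): cross-component state and logic live in one plain object, and components become thin views over it.

    // One object owns the state several components share.
    interface CartViewModel {
      items: { sku: string; qty: number; priceCents: number }[];
      coupon: string | null;
      totalCents(): number;
    }

    const cart: CartViewModel = {
      items: [],
      coupon: null,
      totalCents() {
        // Pricing logic lives here, not inside render functions, so a
        // header badge, a sidebar, and a checkout page all read the
        // same source of truth.
        const sum = this.items.reduce((s, i) => s + i.qty * i.priceCents, 0);
        return this.coupon ? Math.round(sum * 0.9) : sum; // hypothetical 10% off
      },
    };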
I've definitely managed to make a page that uses almost no JavaScript and is dog-slow on Firefox (until Mozilla updated the rendering engine) just by building a table out of flexboxes. There's plenty of places for browsers to chug and die in the increasingly-complicated standard they adhere to.
I have an ever-growing directory listing using SolidJS, and it's up to about 25,000 items. Safari on macOS and iOS handled it well two major versions ago. After the last major update, my phone rendered it faster than an M1 MacBook Pro.
Does anyone have concrete information?
[1]: https://yoyo-code.com/why-is-github-ui-getting-so-much-slowe...
[2]: https://news.ycombinator.com/item?id=44799861
https://chromewebstore.google.com/detail/make-github-great-a...
I see loading spinners everywhere, and even page transitions take ages compared to before.
I am not sure what metric they are using to justify ditching the perfectly working SSR they used before.
If you actually load up a ~2015 version of Jira on today’s hardware it’s basically instant.
It was being hosted on another continent. It was written in PHP. It was rendering server-side with just some light JS on my end.
That used to be the norm.
It's really hard to fight the trend especially in larger orgs.
A lot of the time we just break the branch permissions on the repo we are using and run release branches without PRs and ignore the entire web interface.
> publicly disseminate information regarding the performance of the Cloud Products
https://web.archive.org/web/20210624221204/https://www.atlas...
From a random site, navigate to a GitHub repo, then navigate to a file in the repo, and hit back: I'm on the random site. Hit forward: I'm on the file. The repo page has vanished from history.
So annoying.
One of a large handful of issues I've encountered post react conversion
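My guess at the cause (an assumption on my part, not a reading of GitHub's actual code) is client-side navigation calling history.replaceState where it should call pushState, which collapses the intermediate repo page into a single history entry:

    // SPA navigation done right vs. the bug described above.
    function navigate(url: string, renderPage: (path: string) => void): void {
      history.pushState(null, "", url); // adds an entry; back/forward work
      // history.replaceState(null, "", url) here would overwrite the
      // current entry, making the repo page vanish from history.
      renderPage(url);
    }

    // Back/forward only fire popstate; forgetting this handler is the
    // other classic way an SPA breaks the back button.
    window.addEventListener("popstate", () => {
      // re-render from the restored URL, e.g. renderPage(location.pathname)
    });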
Slow as hell and the Safari search function stopped working. I loaded the same url on Firefox and it was insta-fast.
The Cloud: making operations that take single-digit seconds on a local Raspberry Pi 2 and home Internet take a few minutes.
What a time to be alive.
The solution is a test that fails when Chrome and Safari have substantially different render times.
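Something like that is straightforward with Playwright, which ships both a Chromium and a WebKit engine (Playwright's WebKit isn't Safari exactly, but it's close enough to catch order-of-magnitude regressions). A sketch, where the URL and the 2x threshold are my own assumptions:

    import { chromium, webkit, type BrowserType } from "playwright";

    async function timeLoad(engine: BrowserType): Promise<number> {
      const browser = await engine.launch();
      const page = await browser.newPage();
      const t0 = Date.now();
      // Hypothetical heavy page; a large PR diff would be a good target.
      await page.goto("https://github.com/example/repo/pull/1/files", {
        waitUntil: "networkidle",
      });
      const elapsed = Date.now() - t0;
      await browser.close();
      return elapsed;
    }

    async function main(): Promise<void> {
      const chromiumMs = await timeLoad(chromium);
      const webkitMs = await timeLoad(webkit);
      console.log({ chromiumMs, webkitMs });
      // Fail CI when WebKit is over 2x slower than Chromium.
      if (webkitMs > 2 * chromiumMs) process.exit(1);
    }

    main();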
Good to know others are feeling it too; hopefully it can get resolved soon. In the meantime, I'll try my PR reviews on FF.
Update: Just tested my big PR (+8,661, -1,657) on FF and it worked like a charm!
it's Microsoft, so the answer is: buy a new computer
(which comes with a bundled Windows license)
You really can't escape the enshittification.
My CPU goes to 100% and the fans roar every time I load the dashboard and transactions. I can barely click on customers/subscriptions/etc. I can't be the only one...
Clean Code argues that instead of total rewrites you should focus on gradual improvements: refactor code so that the work pays dividends over time, without re-living all the bugs you lived through 5 years ago whose resolutions you don't recall. On every rewrite project I've ever worked on, we ran into bugs that we, or the team before us, had already fixed years prior.
There are times when a total rewrite might be the best and only option, such as deprecated platforms (think Visual Basic 6 apps that will never get threading).
What frustrates me more is that GitHub used to be open to browse, and the search worked. Now, in their effort to force you to make an account (I HAVE LIKE TEN ALREADY) and log in, they include a few "dark patterns" where parts of search don't work at all.
I don’t know if that’s a good or realistic rule for most projects, but I imagine for performant types of applications, that’s exactly what it takes to prevent eventual slowdown.