Intel Arc Pro B50 GPU Launched at $349 for Compact Workstations
190 points by qwytw | 9/7/2025, 10:06:35 PM | 238 comments | guru3d.com
>Overall the Intel Arc Pro B50 was at 1.47x the performance of the NVIDIA RTX A1000 with that mix of OpenGL, Vulkan, and OpenCL/Vulkan compute workloads both synthetic and real-world tests. That is just under Intel's own reported Windows figures of the Arc Pro B50 delivering 1.6x the performance of the RTX A1000 for graphics and 1.7x the performance of the A1000 for AI inference. This is all the more impressive when considering the Arc Pro B50 price of $349+ compared to the NVIDIA RTX A1000 at $420+.
I guess it's a boon for Intel that Nvidia repeatedly shoots itself in the foot with its own workstation GPUs...
I.e. maybe Nvidia say "if we're going to fuse some random number of cores such that this is no longer a 3050, then let's not only fuse the damaged cores, but also do a long burn-in pass to observe TDP, and then fuse the top 10% of cores by measured TDP."
If they did that, it would mean that the resulting processor would be much more stable under a high duty cycle load, and so likely to last much longer in an inference-cluster deploy environment.
And the extra effort (= bottlenecking their supply of this model at the QC step) would at least partially justify the added cost. Since there'd really be no other way to produce a card with as many FLOPS/watt-dollar, without doing this expensive "make the chip so tiny it's beyond the state-of-the-art to make it stably, then analyze it long enough to precision-disable everything required to fully stabilize it for long-term operation" approach.
Such an appliance could plug into literally any modern computer — even a laptop or NUC. (And for inference, "running on an eGPU connected via Thunderbolt to a laptop" would actually work quite well; inference doesn't require much CPU, nor does it have tight latency constraints on the CPU<->GPU path; you mostly just need enough arbitrary-latency RAM<->VRAM DMA bandwidth to stream the model weights.)
(And yeah, maybe your workstation doesn't have Thunderbolt, because motherboard vendors are lame — but then you just need a Thunderbolt PCIe card, which is guaranteed to fit more easily into your workstation chassis than a GPU would!)
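A rough back-of-the-envelope sketch of why the host link barely matters for inference (all figures below are my own assumptions, not measurements):

    # Time to stream model weights into VRAM over the host link.
    def load_time_seconds(model_gb: float, link_gb_per_s: float) -> float:
        return model_gb / link_gb_per_s

    model_gb = 16        # assumption: a ~16 GB quantized model
    thunderbolt4 = 3.0   # assumption: ~3 GB/s usable over Thunderbolt 4
    pcie4_x16 = 25.0     # assumption: ~25 GB/s usable over PCIe 4.0 x16

    print(f"Thunderbolt: {load_time_seconds(model_gb, thunderbolt4):.1f} s")  # ~5.3 s
    print(f"PCIe 4 x16:  {load_time_seconds(model_gb, pcie4_x16):.1f} s")     # ~0.6 s

That load cost is paid once; after the weights are resident in VRAM, token generation barely touches the link at all.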
https://www.gigabyte.com/Graphics-Card/GV-N5090IXEB-32GD
The thing you linked is just a regular Gigabyte-branded 5090 PCIe GPU card (that they produced first, for other purposes; and which does fit into a regular x16 PCIe slot in a standard ATX chassis), put into a (later-designed) custom eGPU enclosure. The eGPU box has some custom cooling [that replaces the card's usual cooling] and a nice little PSU — but this is not any more "designing the card around the idea it'll be used in an enclosure" than what you'd see if an aftermarket eGPU integrator built the same thing.
My point was rather that, if an OEM [that produces GPU cards] were to design one of their GPU cards specifically and only to be shipped inside an eGPU enclosure that was designed together with it — then you would probably get higher perf, with better thermals, at a better price(!), than you can get today from just buying a standalone peripheral-card GPU (even with the cost of the eGPU enclosure and the rest of its components taken into account!)
Where by "designing the card and the enclosure together", that would look like:
- the card being this weird nonstandard-form-factor non-card-edged thing that won't fit into an ATX chassis or plug into a PCIe slot — its only means of computer connection would be via its Thunderbolt controller
- the eGPU chassis the card ships in, being the only chassis it'll comfortably live in
- the card being shaped less like a peripheral card and more like a motherboard, like the ones you see in embedded industrial GPU-SoC [e.g. automotive LiDAR] use-cases — spreading out the hottest components to ensure nothing blocks anything else in the airflow path
- the card/board being designed to expose additional water-cooling zones — where these zones would be pointless to expose on a peripheral card, as they'd be e.g. on the back of the card, where the required cooling block would jam up against the next card in the slot-array
...and so on.
It's the same logic that explains why those factory-sealed Samsung T-series external NVMe pucks can cost less than the equivalent amount of internal m.2 NVMe. With m.2 NVMe, you're not just forced into a specific form-factor (which may not be electrically or thermally optimal), but you're also constrained to a lowest-common-denominator assumption of deployment environment in terms of cooling — and yet you have to ensure that your chips stay stable in that environment over the long term. Which may require more-expensive chips, longer QC burn-in periods, etc.
But when you're shipping an appliance, the engineering tolerances are the tolerances of the board-and-chassis together. If the chassis of your little puck guarantees some level of cooling/heat-sinking, then you can cheap out on chips without increasing the RMA rate. And so on. This can (and often does) result in an overall-cheaper product, despite that product being an entire appliance vs. a bare component!
With 16GB everybody will just call it another in the long list of Intel failures.
My first software job was at a place doing municipal architecture. The modelers had and needed high-end GPUs in addition to the render farm, but plenty of roles at the company simply needed anything better than what the Intel integrated graphics of the time could produce in order to open the large detailed models.
In these roles the types of work would include things like seeing where every pipe, wire, and plenum for a specific utility or service was in order to plan work between a central plant and a specific room. Stuff like that doesn’t need high amounts of VRAM since streaming textures in worked fine. A little lag never hurt anyone here as the software would simply drop detail until it caught up. Everything was pre-rendered so it didn’t need large amounts of power to display things. What did matter was having the grunt to handle a lot of content and do it across three to six displays.
Today I’m guessing the integrated chips could handle it fine but even my 13900K’s GPU only does DisplayPort 1.4 and up to only three displays on my motherboard. It should do four but it’s up to the ODMs at that point.
For a while Matrox owned a great big slice of this space but eventually everyone fell by the wayside except NVidia and AMD.
This makes it mysterious, since clearly CUDA is an advantage, but higher-VRAM, lower-cost cards with decent open library support would be compelling.
Are there any performance bottlenecks with using 2 cards instead of a single card? I don't think any of the consumer Nvidia cards use NVLink anymore, or at least they haven't for a while now.
Plenty of people use eg 2, 4 or 6 3090s to run large models at acceptable speeds.
Higher VRAM at decent (much faster than DDR5) speeds will make cards better for AI.
But Intel is still lost in its hubris, and still thinks it's a serious player and "one of the boys", so it doesn't seem like they want to break the line.
Given the high demand of graphic cards, is this a plausible scenario?
Given how young and volatile this domain still is, it doesn't seem unreasonable to be wary of it. Big players (Google, OpenAI and the like) are probably pouring tons of money into trying to do exactly that.
- fewer people care about VRAM than HN commenters give the impression of
- VRAM is expensive and wouldn't make such cards profitable at the HN desired price points
I don't get why there are people trying to twist this story or come up with strawmen like the A2000 or even the RTX 5000 series. Intel is coming into this market competitively, which as far as I know is a first, and it's also impressive.
Coming into the gaming GPU market had always been too ambitious a goal for Intel; they should have started with competing in the professional GPU market. It's well known that Nvidia and AMD have always been price gouging this market, so it's fairly easy to enter it competitively.
If they can enter this market successfully and then work their way up the food chain, then that seems like a good way to recover from their initial fiasco.
Toss in a 5060 Ti into the compare table, and we're in an entirely different playing field.
There are reasons to buy the workstation NVidia cards over the consumer ones, but those mostly go away when looking at something like the new Intel. Unless one is in an exceptionally power-constrained environment, yet has room for a full-sized card (not SFF or laptop), I can't see a time the B50 would even be in the running against a 5060 Ti, 4060 Ti, or even 3060 Ti.
I seem to recall that certain esoteric OpenGL things, like fast line rendering, were an NVIDIA marketing differentiator, as only certain CAD packages or similar cared about that. Is this still the case, or has that software segment moved on now?
For me (not quite at the A1000 level, but just above -- still in the prosumer price range), a major one is ECC.
Thermals and size are a bit better too, but I don't see that as $500 better. I actually don't see (m)any meaningful reasons to step up to an Ax000 series if you don't need ECC, but I'd love to hear otherwise.
We could just as well compare it to the slightly more capable RTX A2000, which was released more than 4 years ago. Either way, Intel is competing with the EoL Ampere architecture.
There are huge markets that do not care about SOTA performance metrics but need to get a job done.
That's a bold claim when their acceleration software (IPEX) is barely maintained and incompatible with most inference stacks, and their Vulkan driver is far behind it in performance.
At 16GB I'd still prefer to pay a premium for NVidia GPUs given its superior ecosystem, I really want to get off NVidia but Intel/AMD isn't giving me any reason to.
PS5 has something like 16GB unified RAM, and no game is going to really push much beyond that in VRAM use, we don’t really get Crysis style system crushers anymore.
This isn't really true from the recreational card side, nVidia themselves are reducing the number of 8GB models as a sign of market demand [1]. Games these days are regularly maxing out 6 & 8 GB when running anything above 1080p for 60fps.
The recent prevalence of Unreal Engine 5, often with poor optimization for weaker hardware, is also causing games to be released basically unplayable for most.
For recreational use the sentiment is that 8GB is scraping the bottom of the requirements. Again this is partly due to bad optimization, but games are also being played at higher resolutions, which requires more memory for larger texture sizes.
[1] https://videocardz.com/newz/nvidia-reportedly-reduces-supply...
While I dislike some of the Handmade Hero culture, they are right about one thing: how badly modern hardware tends to be used.
A good thing is that I've turned to libre/indie gaming with games such as Cataclysm DDA: Bright Nights, which has far lower requirements than a UE5 game and is still enjoyable thanks to its playability and in-game lore (and a proper ending compared to vanilla CDDA).
I know this is going back to Intel's Larrabee where they tried it, but I'd be really interested to see what the limits of a software renderer are now, considering the comparative strength of modern processors and the amount of multiprocessing. While I know there's DXVK or projects like dgVoodoo2, which can be an option with sometimes better backwards compatibility, pure software would seem like a more stable reference target than the gradually shifting landscape of GPUs/drivers/APIs.
Lavapipe on Vulkan makes vkQuake playable even on Core 2 Duo systems. Just as a concept, of course. I know about software-rendered Quakes since forever.
Never got far enough to interact with most systems in the game or worry about proper endings.
The Unreal Engine software renderer back then had a very distinct dithering pattern. I played it after I got a proper 3D card, but it didn't feel the same, felt very flat and lifeless.
A $500 32GB consumer GPU is an obvious best seller.
Thus let's call it how it is: they don't want to cannibalize their higher end GPUs.
Here is a better idea of what to do with all that money:
https://themarcooffset.com/
We're already seeing competitors to AWS, but only targeting things like Qwen, DeepSeek, etc.
There are enterprise customers who have compliance laws and literally want AI but cannot use any of the top models, because everything has to run on their own infrastructure.
That's pretty funny considering that PC games are moving more towards 32GB RAM and 8GB+ VRAM. The next generation of consoles will of course increase to make room for higher quality assets.
You're wrong. It's probably more like 9 HN posters.
They also announced a 24 GB B60 and a double-GPU version of the same (saves you physical slots), but it seems like they don't have a release date yet (?).
https://www.asrock.com/Graphics-Card/Intel/Intel%20Arc%20Pro...
This to me is the gamer perspective. This segment really does not need even 32GB, let alone 64GB or more.
The only time usage was "high" was when I created a VM with 48GB RAM just for kicks.
It was useless. But I could say I had 64GB RAM.
If you have a private computer, why would you even buy something with 16GB in 2025? My 10 year old laptop had that much.
I'm looking for a new laptop and I'm looking at a 128GB setup - so those 200 Chrome tabs can eat it, and I have space to run other stuff, like those horrible Electron chat apps + a game.
How so? The prosumer local AI market is quite large and growing every day, and is much more lucrative per capita than the gamer market.
Gamers are an afterthought for GPU manufacturers. NVIDIA has been neglecting the segment for years, and is now much more focused on enterprise and AI workloads. Gamers get marginal performance bumps each generation, and side effect benefits from their AI R&D (DLSS, etc.). The exorbitant prices and performance per dollar are clear indications of this. It's plain extortion, and the worst part is that gamers accepted that paying $1000+ for a GPU is perfectly reasonable.
> This segment really does not need even 32GB, let alone 64GB or more.
4K is becoming a standard resolution, and 16GB is not enough for it. 24GB should be the minimum, and 32GB for some headroom. While it's true that 64GB is overkill for gaming, it would be nice if that would be accessible at reasonable prices. After all, GPUs are not exclusively for gaming, and we might want to run other workloads on them from time to time.
While I can imagine that VRAM manufacturing costs are much higher than DRAM costs, it's not unreasonable to conclude that NVIDIA, possibly in cahoots with AMD, has been artificially controlling the prices. While hardware has always become cheaper and more powerful over time, for some reason, GPUs buck that trend, and old GPUs somehow appreciate over time. Weird, huh. This can't be explained away as post-pandemic tax and chip shortages anymore.
Frankly, I would like some government body to investigate this industry, assuming they haven't been bought out yet. Label me a conspiracy theorist if you wish, but there is precedent for this behavior in many industries.
During the cryptocurrency hype, GPUs were already going for insane prices, and together with low energy prices or surplus energy (which solar can cause, but nuclear should too) that allows even governments to make cheap money (and do hashcat cracking, too). If I were North Korea I'd know my target. Turns out they did, but in a different way. That was around 2014. Add on top of this Stadia and GeForce Now as examples of renting GPUs for gaming (there are more, and Stadia flopped).
I didn't mention LLMs since that has been the most recent development.
All in all, it turns out GPUs are more valuable than what they were sold for if your goal isn't personal computer gaming. Hence the prices went up.
Now, if you want to thoroughly investigate this market you need to figure out what large foreign forces (governments, businesses, and criminal enterprises) use these GPUs for. The US government has been aware of the above for a long time; hence the export restrictions on GPUs, which are meant to slow the opponent down from catching up. The opponent is the non-free world (China, North Korea, Russia, Iran, ...), though the current administration is acting insane.
NVIDIA is also taking consumers for a ride by marketing performance based on frame generation, while trying to downplay and straight up silence anyone who points out that their flagship cards still struggle to deliver a steady 4K@60 without it. Their attempts to control the narrative of media outlets like Gamers Nexus should be illegal, and fined appropriately. Why we haven't seen class-action lawsuits for this in multiple jurisdictions is beyond me.
Their GPU business is a slow upstart. If they have a play that could massively disrupt the competition, and has a small chance of epic failure, that should be very attractive to them.
The number of chips on the bus is usually pretty low (1 or 2 of them on most GPUs), so GPUs tend to have to scale out their memory bus widths to get to higher capacity. That's expensive and takes up die space, and for the conventional case (games) isn't generally needed on low end cards.
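A quick sketch of that arithmetic (assuming typical GDDR6 parts: 32-bit device interfaces and 2 GB densities; clamshell puts two chips on each 32-bit channel):

    # VRAM capacity is tied to bus width: each GDDR6 device has a 32-bit
    # interface, so the bus width fixes the chip count.
    def vram_gb(bus_width_bits: int, gb_per_chip: int, clamshell: bool = False) -> int:
        chips = bus_width_bits // 32
        return chips * gb_per_chip * (2 if clamshell else 1)

    print(vram_gb(128, 2))                  # 128-bit bus, 2 GB chips ->  8 GB
    print(vram_gb(128, 2, clamshell=True))  # same bus, clamshell     -> 16 GB
    print(vram_gb(256, 2))                  # 256-bit bus             -> 16 GB
    print(vram_gb(256, 2, clamshell=True))  # 256-bit, clamshell      -> 32 GB

Getting past those numbers means a wider (more expensive) bus, higher-density chips, or a clamshell board - which is exactly the kind of thing vendors reserve for the workstation SKUs.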
What really needs to happen is someone needs to make some "system seller" game that is incredibly popular and requires like 48GB of memory on the GPU to build demand. But then you have a chicken/egg problem.
Example: https://wccftech.com/nvidia-geforce-rtx-5090-128-gb-memory-g...
You can get 96GB of VRAM and about 40-70% of the speed of a 4090 for $4000.
Especially when you are running a large number of applications that you want to talk to each other it makes sense ... the only way to do it on a 4090 is to hit disk, shut the application down, start up the other application, read from disk ... it's slowwww... the other option is a multi-GPU system but then it gets into real money.
trust me, it's a gamechanger. I just have it sitting in a closet. Use it all the time.
The other nice thing is unlike with any Nvidia product, you can walk into an apple store, pay the retail price and get it right away. No scalpers, no hunting.
Why not just buy 3 cards then? These cards don't require active cooling anyway and you can just fit 3 in a decent-sized case. You will get 3x VRAM speed and 3x compute. And if your use case is LLM inference, it will be a lot faster than 1x card with 3x VRAM.
> 3x VRAM speed and 3x compute
LLM scaling doesn’t work this way. If you have 4 cards, you may get 2x performance increase if you use vLLM. But you’ll also need enough VRAM to run FP8. 3 cards would only run at 1x performance.
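For reference, a minimal vLLM tensor-parallel sketch (the model name and sampling settings are placeholders, not a recommendation):

    from vllm import LLM, SamplingParams

    llm = LLM(
        model="Qwen/Qwen2.5-32B-Instruct",  # assumption: any model that fits in aggregate VRAM
        tensor_parallel_size=4,             # shard the weights across 4 GPUs
    )
    params = SamplingParams(max_tokens=256, temperature=0.7)
    outputs = llm.generate(["Explain tensor parallelism in one paragraph."], params)
    print(outputs[0].outputs[0].text)

Each token step needs an all-reduce across the shards, so on consumer cards without NVLink the PCIe hops eat a chunk of the theoretical Nx speedup.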
Currently running them in different VMs to be able to make full use of them; I used to have them running in different Docker containers, however OOM exceptions would frequently bring down the whole server, which running in VMs helped resolve.
[1]: https://github.com/XiongjieDai/GPU-Benchmarks-on-LLM-Inferen...
Which means Intel or AMD making an affordable high-VRAM card is win-win. If Nvidia responds in kind, Nvidia loses a ton of revenue they'd otherwise have available to outspend their smaller competitors on R&D. If they don't, they keep more of those high-margin customers but now the ones who switch to consumer cards are switching to Intel or AMD, which both makes the company who offers it money and helps grow the ecosystem that isn't tied to CUDA.
People say things like "it would require higher pin counts" but that's boring. The increase in the amount people would be willing to pay for a card with more VRAM is unambiguously more than the increase in the manufacturing cost.
It's more plausible that there could actually be global supply constraints in the manufacture of GDDR, but if that's the case then just use ordinary DDR5 and a wider bus. That's what Apple does and it's fine, and it may even cost less in pins than you save because DDR is cheaper than GDDR.
It's not clear what they're thinking by not offering this.
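To put the wider-bus idea in rough numbers (nominal per-pin rates, just to show the orders of magnitude):

    # Peak bandwidth = bus width (bits) * per-pin data rate (Gbps) / 8.
    def bandwidth_gb_s(bus_bits: int, gbps_per_pin: float) -> float:
        return bus_bits * gbps_per_pin / 8

    print(bandwidth_gb_s(128, 20))   # 128-bit GDDR6 @ 20 Gbps   -> 320 GB/s
    print(bandwidth_gb_s(64, 6.4))   # one DDR5-6400 channel     -> ~51 GB/s
    print(bandwidth_gb_s(512, 6.4))  # 512-bit LPDDR5-class bus  -> ~410 GB/s

So commodity DDR5/LPDDR5 can reach GDDR-class bandwidth if you spend pins on a wide bus, which is roughly the Apple approach: more board cost, but much cheaper and higher-capacity memory per gigabyte.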
100% agree. CUDA is a bit of a moat, but the earlier in the hype cycle viable alternatives appear, the more likely the non CUDA ecosystem becomes viable.
> It's not clear what they're thinking by not offering this.
They either don't like making money or have a fantasy that one day soon they will be able to sell pallets of $100,000 GPUs they made for $2.50 like Nvidia can. It doesn't take a PhD and an MBA to figure out that the only reason Nvidia has what should be a short-term market available to them is the failure of Intel, AMD, and the VC/innovation side to offer any competition.
It is such an obvious win-win that it would probably be worth skipping the engineering and just announcing the product, for sale by the end of the year, to force everyone's hand.
I guess you already have the paper if it is that unambiguous. Would you mind sharing the data/source?
I think your argument is still true overall, though, since there are a lot of "gpu poors" (i.e. grad students) who write/invent in the CUDA ecosystem, and they often work in single card settings.
Fwiw Intel did try this with Arctic Sound / Ponte Vecchio, but it was late out the door and did not really perform (see https://chipsandcheese.com/p/intels-ponte-vecchio-chiplets-g...). It seems like they took on a lot of technical risk; hopefully some of that transfers over to a future project, though Falcon Shores was cancelled. They really should have released some of those chips even at a loss, but I don't know the cost of a tape-out.
AMD has lagged so long because of the software ecosystem but the climate now is that they'd only need to support a couple popular model architectures to immediately grab a lot of business. The failure to do so is inexplicable.
I expect we will eventually learn that this was about yet another instance of anti-competitive collusion.
Lisa and Jensen are cousins. I think that explains it. Lisa can easily prove me wrong by releasing a high-memory GPU that significantly undercuts Nvidia's RTX 6000 Pro.
1. https://youtu.be/iM58i3prTIU?si=JnErLQSHpxU-DlPP&t=225
2. https://www.intel.com/content/www/us/en/developer/articles/t...
Why would you bother with any Intel product with an attitude like that? It gives zero confidence in the company. What business is Intel in, if not competing with Nvidia and AMD? Is it giving up competing with AMD too?
That's the correct call in my opinion. Training is far more complex and will span multi data centers soon. Intel is too far behind. Inference is much simpler and likely a bigger market going forward.
That's how you get things like good software support in AI frameworks.
Inference is vastly simpler than training or scientific compute.
In many cases where 32GB won't be enough, 48 wouldn't be enough either.
Oh and the 5090 is cheaper.
Foundry business. The latest report on discrete graphics market share has Nvidia at 94%, AMD at 6% and Intel at 0%.
I may still have another 12 months to go. But in 2016 I made a bet against Intel engineers on Twitter and offline, suggesting GPUs are not a business they want to be in, or at least that they were too late. They said at the time they would get 20% market share minimum by 2021. I said I would be happy if they did even 20% by 2026.
Intel is also losing money, and they need cashflow to compete in the foundry business. I have long argued they should have cut off the GPU segment when Pat Gelsinger arrived; it turns out Intel bound themselves to GPUs by all the government contracts and supercomputers they promised to deliver. Now that they have delivered all or most of it, they will need to think about whether to continue or not.
Unfortunately, unless the US points guns at TSMC, I just don't see how Intel will be able to compete, as Intel needs to be in a leading-edge position in order to command the margin required for Intel to function. Right now, in terms of density, Intel 18A is closer to TSMC N3 than N2.
If Nvidia gets complacent, as Intel became when it had the market share in the CPU space, there is opportunity for Intel, AMD and others in Nvidia's margin.
They may not have to, frankly, depending on when China decides to move on Taiwan. It's useless to speculate—but it was certainly a hell of a gamble to open a SOTA (or close to it—4 nm is nothing to sneeze at) fab outside of the island.
I want hardware that I can afford and own, not AI/datacenter crap that is useless to me.
I like to Buy American when I can but it's hard to find out which fabs various CPUs and GPUs are made in. I read Kingston does some RAM here and Crucial some SSDs. Maybe the silicon is fabbed here but everything I found is "assembled in Taiwan", which made me feel like I should get my dream machine sooner rather than later
I have a service that runs continuously and reencodes any videos I have into h265 and the iGPU barely even notices it.
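A minimal sketch of how such a service's core loop might look (not the commenter's actual setup; paths and quality settings are placeholders, and it assumes an ffmpeg build with Quick Sync support):

    import subprocess
    from pathlib import Path

    SRC = Path("/media/incoming")   # placeholder paths
    DST = Path("/media/encoded")

    for video in SRC.glob("*.mp4"):
        out = DST / f"{video.stem}.h265.mkv"
        subprocess.run([
            "ffmpeg", "-hwaccel", "qsv",   # decode on the iGPU as well
            "-i", str(video),
            "-c:v", "hevc_qsv",            # HEVC encode on Quick Sync
            "-global_quality", "25",       # quality target, roughly CRF-like
            "-c:a", "copy",                # leave audio untouched
            str(out),
        ], check=True)

Because the fixed-function media engine does the heavy lifting, the shader side of the iGPU stays almost idle, which matches the "barely even notices it" experience.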
I'll have to consider pros and cons with Ultra chips, thanks for the tip.
Apologies for the video link. But a recent pretty in depth comparison: https://youtu.be/kkf7q4L5xl8
There really is no such thing as "buying American" in the computer hardware industry unless you are talking about the designs rather than the assembly. There are also critical parts of the lithography process that depend on US technology, which is why the US is able to enforce certain sanctions (and due to some alliances with other countries that own the other parts of the process).
Personally I think people get way too worked up about being protectionist when it comes to global trade. We all want to buy our own country's products over others but we definitely wouldn't like it if other countries stopped buying our exported products.
When Apple sells an iPhone in China (and they sure buy a lot of them), Apple is making most of the money in that transaction by a large margin, and in turn so are you since your 401k is probably full of Apple stock, and so are the 60+% of Americans who invest in the stock market. A typical iPhone user will give Apple more money in profit from services than the profit from the sale of the actual device. The value is really not in the hardware assembly.
In the case of electronics products like this, almost the entire value add is in the design of the chip and the software that is running on it, which represents all the high-wage work, and a whole lot of that labor in the US.
US citizens really shouldn't envy a job where people are sitting at an electronics bench doing repetitive assembly work for 12 hours a day in a factory wishing we had more of those jobs in our country. They should instead be focused on making high level education more available/affordable so that they stay on top of the economic food chain, where most/all of its citizens are doing high-value work rather than causing education to be expensive and beg foreign manufacturers to open satellite factories to employ our uneducated masses.
I think the current wave of populist protectionist ideology is essentially blaming the wrong causes of declining affordability and increasing inequality for the working class. Essentially, people think that bringing the manufacturing jobs back and reversing globalism will right the ship on income inequality, but the reality is that the reason equality was so good for Americans in the mid-century was that the wealthy were taxed heavily, European manufacturing was decimated in WW2, and labor was in high demand.
The above of course is all my opinion on the situation, and a rather long tangent.
EDIT: I did think about what the closest thing to artisan silicon would be, thought of the POWER9 CPUs, and found out those are made in the USA. The Talos II is also manufactured in the US, with the IBM POWER9 processors being fabbed in New York while the Raptor motherboards are manufactured in Texas, which is also where their systems are assembled.
https://www.phoronix.com/review/power9-threadripper-core9
I randomly thought of paint companies as another example, with Sherwin-Williams and PPG having US plants.
The US is still the #2 manufacturer in the world, it's just a little less obvious in a lot of consumer-visible categories.
Also, do these support SR-IOV, as in handing slices of the GPU to virtual machines?
Is HDMI seen as a “gaming” feature, or is DP seen as a “workstation” interface? Ultimately HDMI is a brand that commands higher royalties than DP, so I suspect this decision was largely chosen to minimize costs. I wonder what percentage of the target audience has HDMI only displays.
Converting from DisplayPort to HDMI is trivial with a cheap adapter if necessary.
HDMI is mostly used on TVs and older monitors now.
Only now are DisplayPort 2 monitors coming out
Not cheap though. And also not 100% caveat-free.
I have a Level1Techs hdmi KVM and it's awesome, and I'd totally buy a display port one once it has built in EDID cloners, but even at their super premium price point, it's just not something they're willing to do yet.
[0] https://www.amazon.co.uk/ASUS-GT730-4H-SL-2GD5-GeForce-multi...
https://www.theregister.com/2024/03/02/hdmi_blocks_amd_foss/
(Note that some self-described “open” standards are not royalty-free, only RAND-licensed by somebody’s definition of “R” and “ND”. And some don’t have their text available free of charge, either, let alone have a development process open to all comers. I believe the only thing the phrase “open standard” reliably implies at this point is that access to the text does not require signing an NDA.
DisplayPort in particular is royalty-free—although of course with patents you can never really know—while legal access to the text is gated[2] behind a VESA membership with dues based on the company revenue—I can’t find the official formula, but Wikipedia claims $5k/yr minimum.)
[1] https://hackaday.com/2023/07/11/displayport-a-better-video-i...
[2] https://vesa.org/vesa-standards/
As someone who has toyed with OS development, including a working NVMe driver, that's not to be underestimated. I mean, it's an absurd idea, graphics is insanely complex. But documentation makes it theoretically possible... a simple framebuffer and 2d acceleration for each screen might be genuinely doable.
https://www.x.org/docs/intel/ACM/
I assume you have to pay HDMI royalties for DP ports which support the full HDMI spec, but older HDMI versions were supersets of DVI, so you can encode a basic HDMI compatible signal without stepping on their IP.
Otherwise HDMI would have been dead a long time ago.
> Is HDMI seen as a “gaming” feature
It's a tv content protection feature. Sometimes it degrades the signal so you feel like you're watching tv. I've had this monitor/machine combination that identified my monitor as a tv over hdmi and switched to ycbcr just because it wanted to, with assorted color bleed on red text.
It's not competing with AMD/Nvidia at twice the price in terms of performance, but it's also too expensive for a cheap gaming rig. And then there are people who are happy with integrated graphics.
Maybe I'm just lacking imagination here, I don't do anything fancy on my work and couch laptops and I have a proper gaming PC.
With SR-IOV* there is a low cost path for GPU in virtual machines. Until now this has (mostly) been a feature exclusive to costly "enterprise" GPUs. Combine that with the good encoders and some VDI software and you have VM hosted GPU accelerated 3D graphics to remote displays. There are many business use cases for this, and no small number of "home lab" use cases as well.
Linux is a first class citizen with Intel's display products, and B50/60 is no different, so it's a nice choice when you want a GPU accelerated Linux desktop with minimum BS. Given the low cost and power, it could find its way into Steam consoles as well.
Finally, Intel is the scrappy competitor in this space: they are being very liberal with third parties and their designs, unlike the incumbents. We're already seeing this with Maxsun and others.
* Intel has promised this for B50/60 in Q4
Therefore I can install Proxmox VE and run multiple VMs, assigning a vGPU to each of them for video transcoding (IPCam NVR), AI and other applications.
https://github.com/Upinel/PVE-Intel-vGPU
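Once SR-IOV support is there, slicing the GPU into virtual functions is mostly a sysfs exercise; a hedged sketch (the PCI address and VF count are placeholder assumptions, and this depends on the driver support Intel has promised):

    from pathlib import Path

    GPU = Path("/sys/bus/pci/devices/0000:03:00.0")  # assumption: the card's PCI address

    total = int((GPU / "sriov_totalvfs").read_text())
    print(f"device supports up to {total} virtual functions")

    # Enable 4 VFs; each can then be passed through to a separate VM.
    (GPU / "sriov_numvfs").write_text("4")

    for vf in sorted(GPU.glob("virtfn*")):
        print(vf.name, "->", vf.resolve().name)  # each VF's own PCI address

Each virtual function shows up as its own PCI device, which is what Proxmox (or any other hypervisor) hands to a guest.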
All current Intel Flex cards seem to be based on the previous gen "Xe".
(A half-height single-slot card would be even smaller, but those are vanishingly rare these days. This is pretty much as small as GPUs get unless you're looking more for a "video adapter" than a GPU.)
Intel has many, many solid customers at the government, enterprise and consumer levels.
They will be around.
[1] https://www.maxsun.com/products/intel-arc-pro-b60-dual-48g-t...
Kind of. It's more like two 24GB B60s in a trenchcoat. It connects to one slot but it's two completely separate GPUs and requires the board to support PCIe bifurcation.
And lanes. My board has two PCIe x16 slots fed by the CPU, but if I use both they'll only get x8 lanes each. Thus if I plugged two of these in there, I'd still only have two working GPUs, not four.
The biggest Deepseek V2 models would just fit, as would some of the giant Meta open source models. Those have rather pleasant performance.
In theory, how feasible is that?
I feel like the software stack might be like a Jenga tower. And PCIe limitations might hit pretty hard.
> "Because 48GB is for spreadsheets, feed your rendering beast with a buffet of VRAM."
Edit: I guess it must just be a bad translation or sloppy copywriting, and they mean it's not for just spreadsheets rather than it is...
I have this cool and quiet fetish so 70 W is making me extremely interested. IF it also works as a gaming GPU.
I would happily buy 96 GB for $3490, but this makes very little sense.
Both their integrated and dedicated GPUs have been steadily improving each generation. The Arc line is both cheaper and comparable in performance to more premium NVIDIA cards. The 140T/140V iGPUs do the same to AMD APUs. Their upcoming Panther Lake and Nova Lake architectures seem promising, and will likely push this further. Meanwhile, they're also more power efficient and cooler, to the point where Apple's lead with their ARM SoCs is not far off. Sure, the software ecosystem is not up to par with the competition yet, but that's a much easier problem to solve, and they've been working on that front as well.
I'm holding off on buying a new laptop for a while just to see how this plays out. But I really like how Intel is shaking things up, and not allowing the established players to rest on their laurels.
It clocks in at 1503.4 samples per second, behind the NVidia RTX 2060 (1590.93 samples / sec, released Jan 2019), AMD Radeon RX 6750 XT (1539, May 2022), and Apple M3 Pro GPU 14 cores (1651.85, Oct 2023).
Note that this perf comparison is just ray-tracing rendering, useful for games, but might give some clarity on performance comparisons with its competition.
This reminds me a lot of the LLM craze and how they wanted to charge so much for simple usage at the start, until China released DeepSeek. Ideally we shouldn't rely on China, but do we have a choice? The entire US economy has become reliant on monopolies to keep their insanely high stock prices and profit margins.