Ask HN: Anyone using augmented reality, VR, glasses, helmets etc. in industry?
56 NewUser76312 59 6/25/2025, 3:12:35 PM
Since Google Glass made its debut in 2012, there's been a fair amount of hype around augmented reality and related tech coming into its own in industry, presumably enhancing worker productivity and capabilities.
But I've heard and seen so little use in any industries. I would have thought at a minimum that having access to hands-free information retrieval (e.g. blueprints, instructions, notes, etc), video chat and calls for point-of-view sharing, etc would be quite useful for a number of industries. There do seem to be interesting pilot trials involving Hololens in US defense (IVAS) as well as healthcare telemonitoring in Serbia.
Do you know of any relevant examples or use cases, or are you a user yourself? What do you think are the hurdles - actual usefulness, display quality, cost, something else?
The biggest hurdle is that none of the large companies think there is enough profit to be made from AR. The HoloLens 2 is the only headset on the market capable of running the required software while also being safe to use in an active shop environment (VR with passthrough is not suitable). Unfortunately, the HoloLens 2 is almost 6 years old and is being stretched to the absolute limits of its hardware. The technology is good but feels like it is only 90% of the way to where it needs to be. Even a simple revision with double the RAM and a faster, more power-efficient processor would alleviate many of the issues we've experienced.
Ultimately, from what I've seen, AR is about making the human user better at their job, and there are tons of industries where it could have applications. But tech companies don't actually want to make things that are directly useful to people who work with their hands, so instead we will just continue to toss more money at AI, hoping to make ourselves obsolete.
Quick question about your use case - is the 3D overlay really that important, or would you get most of the value simply seeing the blueprints in your heads-up display, maybe doing a quick finger swipe or voice command to switch between pages/images?
Right now we are just using it for special projects that are complex and have little margin for error. We'd like to use it for everything, but that isn't feasible given where the tech is currently stuck.
I am curious, what size of clients are you working with and how many contracts has it realistically turned into?
I also believe proper AR hardware/software can revolutionize the QA and inspections industry.
What I am noticing is a chicken-and-egg problem: companies want proof it works, while also being reluctant to put their money where their mouth is and invest in the R&D. That in turn leads Microsoft and similar companies to hold back from fully investing in new AR tech.
As such, it all stays mostly in experimental and drawing board land, never quite fully reaching the market.
Thoughts?
QA is the big sales point of the software we are using, but there are many other potential applications for the same product. It should be possible to overlay the model on the main assembly prefab, then use that to quickly mark where holes should be drilled and additional pieces attached. The other application being explored is using the holographic overlays to construct things out of the usual order. Instead of building part 1 and then starting part 2 (since it needs to conform to the first part), you can build around the hologram, so that you're not relying on the previously built parts to ensure your angles are correct.
I agree about the chicken-and-egg problem. It's an emerging technology where the payoff might be a decade away: customers need software that will actually benefit them, developers need reliable hardware capable of running software with practical uses, and hardware companies want to know there is a customer base. The issue is that AR falls into the category of products the customer doesn't yet know they want, so the only way it gets developed is if one of the hardware manufacturers takes a leap of faith and makes the long-term investment. Sadly, I feel like AR is a million-dollar idea with practical uses that has to contend with a business climate where you can make billions building some doodad that collects private data and displays ads to the masses.
Companies have put billions into R&D, but still haven't delivered a product that surpasses the hurdle rate.
I use VR for gaming. The headsets are uncomfortable after about 45 minutes, they're hot and sweaty, and they're incredibly isolating. All that's fine if you want to slay baddies while alone at home, but utterly repellent to most people.
- Viture Pro XR glasses
- Vuzix Z100 glasses (through Mentra)
The Vitures I use as a lightweight alternative to VR headsets like the Meta Quest: I lie down on the couch or in bed and watch videos while wearing them.
The Vuzix are meant to be daily-wear glasses with a HUD; I have yet to break them in.
Later this year, Google/Samsung are due big AR releases, and I think Meta is as well.
It'll be the debut of Android XR.
It's good enough for watching videos, but for working and reading text, I personally haven't used a device with high enough text quality to prevent eye strain.
I'm very bullish on AR though, and I'm willing to bet that consumer grade devices which are genuinely comfortable to work in will become available within the next 2-3 years.
To me, AR is the next step in Human-Computer Interaction while we wait for full BCI (Brain-Computer Interface) devices.
Happy to be proven wrong obviously but so far that's my outlook.
>some combination of way too heavy, expensive, fragile, short battery life, no wifi connectivity, too much UI long to get to point of value and/or simply not useful
Was the screen quality, resolution, visibility in brightness, etc also one of these limiting factors? Or would you say screen quality has gotten reasonable by now?
>The AR/VR use in the field typically came down to looking something up in a manual or calling someone.
That's good to hear as someone interested in the field, I've been skeptical of the fidelity and utility of the fancy augmented 3D overlays.
Ah I see you realized something similar: >The cool AR 3-D demos or overlays rarely worked in the field on real equip or didn't actually convey anything useful (everyone knows the basics of how the machine works).
>Both easily and perhaps more effectively done on a smartphone.
Surely there are some use cases where hands-free operation would be a game changer, but I don't know enough about potential industries where this would be the case.
>The use case we're currently working on is inspections or filling out forms with audio/videos.
That's pretty interesting. Do you even need a screen, or just voice? A pretty quick-and-dirty way to do it would be to take PDF forms, enumerate (put small numbers next to) every editable field, and then use voice commands like, "write the following in field 3: ...." The purpose of the screen would be to verify what the LLM + voice is entering in the form. Then at the end you tell it to save/submit or whatever.
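As a rough sketch of that idea (everything here is hypothetical: field numbering scheme, command phrasing, and function names are my own assumptions, and the speech-to-text step is taken as given), the command parsing itself is simple:

```python
import re

# Hypothetical parser for the "numbered fields + voice" idea:
# each editable field on the PDF gets a small printed number, and a
# transcribed command addresses a field by that number.
COMMAND_RE = re.compile(
    r"write (?:the following )?in field (\d+)\s*:\s*(.+)",
    re.IGNORECASE,
)

def parse_command(transcript: str):
    """Parse one transcribed command into (field_number, text), or None."""
    m = COMMAND_RE.match(transcript.strip())
    if not m:
        return None
    return int(m.group(1)), m.group(2).strip()

def fill_form(transcripts, form=None):
    """Apply a sequence of voice commands to a dict of field values.

    Later commands for the same field overwrite earlier ones, which
    gives the user a natural way to correct mistakes by repeating
    the command.
    """
    form = dict(form or {})
    for t in transcripts:
        parsed = parse_command(t)
        if parsed:
            field, text = parsed
            form[field] = text
    return form
```

The screen then only needs to render the resulting field-number-to-text mapping for the user to eyeball before saying "submit."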
In my day job I occasionally hear about some AR startup doing demos for training and parts setup in CNC machines but the value add seems to be too insignificant for the work required.
Hurdles? Battery life, proper hardening against dust/water.
It's the opposite: surround projection, so you can put a group in a room into a scenario.
We have another product that's geared towards collaborating and sharing data between teams and vendors, and it seems better suited there, but that one is a web application, and I don't know how well VR glasses are supported there.
I think it'd be awesome in the CAD applications themselves but I don't know if any of them support it out of the box.
Worked great to avoid eye fatigue/posture issues on airplanes though. I'm happy I have them, but in hindsight I'd have gotten a Viture or something with a better nose bridge and a narrower field of view.
- Does your neck get tired?
- Do you ever have to be on video calls? I can't talk to clients looking like a spaceman
It's not actually the weight. I have a Quest 3 with a BoboVR head strap, external battery, etc that all add up to be heavier than the AVP, but I can easily go for multi-hour social sessions with that on without any physical discomfort. You can put a ton of weight on your head with perfect comfort as long as it's balanced properly.
The AVP's real problem is that its ergonomics are just shit. As with a bunch of other things, they designed for the ads instead of actual usability, so it's significantly worse than headsets that are actually much heavier, and the earband design with the way-too-far-back connectors and no top connections makes it nigh impossible for third parties to improve on it.
The closest thing I've seen to making it comfortable is the third-party ResMed Kontor headstrap, and that's being produced in such low numbers that it's functionally impossible to actually buy.
I could see a frequent traveler using an AVP as a "full setup" on the go. In my experience, I can get by for most things with just a MacBook. Some projects really benefit from the extra screen real estate (and a mechanical keyboard).
I can get any task done with my laptop. But not a full day's work. And if I want to travel while I work (which I would like to do) then I need a better solution. This is why I'm looking into VR and also a 4K projector, but a projector would have to be able to be seen in a bright environment, and I don't know what the current state of projectors is.
Looks like 3k lumens is your maximum. https://www.google.com/search?q=3000+lumen+projector+in+dayl...
We post case studies regularly on our blog, so you can read about real world deployments there: blog.resolvebim.com
From my experience the hardware is still a hurdle simply because it doesn’t completely replace all pc based workflows right now and therefore has to be used selectively at the right moments alongside 2D monitors.
From your company's landing page, I saw the video and it looks like you're working with in-office project managers and similar white-collar types.
Do you work with any products in the field, like on the job sites? Is that something that would be interesting or valuable? Some examples: letting workers be able to quickly share first-person recorded videos of issues, first-person video chat with supervisors, ability to pull up blueprints and instructions in their heads-up displays, etc? Assuming perhaps a different platform than the Meta, as I don't think fully covered VR would be appropriate for a worksite.
You can see in that video that you can markup the site virtually and yes you can record video, leave issue markers, pull up 2D plans from other tools we integrate with like Procore, ACC, etc. However, it still is primarily a stationary tool on site because of the field of view limitations.
There are some rumors about next gen MR headsets allowing for a "full field of view" by basically removing the head gasket altogether. We'll see.
They use the Apple Vision Pro headset fairly significantly in human interaction and data gathering that they then utilize for simulations.
I spent a lot of time in graduate school researching AR/VR technology (specifically regarding its utility as an accessibility tool) and learning about barriers to adoption.
In my opinion, there are three major hurdles preventing widespread adoption of this modality:
1. *Weight*: To achieve computation like the HoloLens's, you need powerful processing, and the simplest solution is to put that processing in the device, which adds weight. The HoloLens 2 weighs approximately 566g (1.24lb), which is a LOT compared to a traditional pair of glasses at roughly 20-50g. Speaking as someone who developed for the HL2 for a few years, all-day wear is uncomfortable and untenable. The device HAS to be comfortable for all-day use, otherwise it hinders adoption.
2. *Battery*: Ironically, making the device small enough for all-day wear means shrinking its battery, which reduces its utility as an all-day wearable: a smaller onboard battery stores less energy. This is a problematic trade-off: you don't want the device to weigh so much that people can't wear it, but you also don't want it to weigh so little that it ceases to have function.
3. *Social Acceptability*: This is where I have some expertise, as it was the subject of my research. Simply put, if a wearer feels as though they stand out by wearing an XR device, they're hesitant to wear it at all when interacting with others. This means that an XR device must not be ostentatious, as the Apple Vision Pro, HoloLens, MagicLeap, and Google Glass all were.
In recent years, there have been a lot of strides in this space, but there's a long way to go.
Firstly, there is an increasing understanding that the futuristic devices we see in sci-fi cannot (yet) be achieved with onboard computation. That said, local, bidirectional, wireless streaming between a lightweight XR device (glasses) and a device with stronger processing power (a la smartphone) provides a potential way of offloading computation from the device itself, with the glasses simply displaying results.
Secondly, Li+ battery tech continues to improve, and there are now [simple head-worn displays capable of rendering text and bitmaps](https://www.vuzix.com/products/z100-smart-glasses) with a battery life of an entire day. There is also active development work by the folks at [Mentra (YC W25)](https://www.ycombinator.com/companies/mentra) on highlighting these devices' utility, even with their limited processing power.
Lastly, with the first two developments combined, social acceptability is improving dramatically! There are lots of new head-worn displays emerging with varying levels of ability. There was the recent [Android XR keynote](https://www.youtube.com/watch?v=7nv1snJRCEI), which shows some impressive spatial awareness, as well as the [Mentra Live](https://mentra.glass/pages/live) (an open-source Meta Raybans clone). In terms of limited displays with social acceptability, there are the [Vuzix Z100](https://www.vuzix.com/products/z100-smart-glasses), and [Even Realities G1](https://www.evenrealities.com/g1), which can display basic information (that still has a lot of utility!).
As an owner of the Vuzix Z100 and a former developer in the XR space, the progress is slow, but steady. The rapid improvements in machine learning (specifically in STT, TTS, and image understanding) indirectly improve the AR space as well.
At the end of the day, you are asking someone to put something on their face that is still very different ergonomically than glasses (and I’m not sure even glasses would overcome enough friction). The ROI has to overcome the business (or personal) friction of buying the hardware, the friction of the form factor plus any friction from changed workflows.
Now put that in an operational workflow instead of training and the risks go up. Most are still skeptical of device reliability (not to say there aren’t suitable devices for operational roles but the perception is still a hurdle, and the applicability is often device-specific). Now add on to that limited experience with devices (many decision makers have never put one on), added security complications, specialized software development skills, limited content libraries and very real accessibility concerns and a lot of enterprises can never get past an “innovation center demo.”
For many industries the value proposition just isn’t there yet. But that said, I’d recommend digging a little deeper as there’s a lot of existing use-cases and deployments, both failed and successful, outside of IVAS.
Very curious, don't leave us hanging! Assuming it's not confidential.
So ymmv
This is fascinating. What are your most used features?
> extended monitor
Do you also use a real monitor in the field of view?
That said, that might be because the thing that always stops me first is how front-heavy the damn thing is. I do wonder how GP deals with that.
He wouldn’t invest in Palantir either.
Convince the best seed fund in the world that it has a blind spot, maybe some risks will yield something great.