The talk about pixels is misleading. The research paper doesn't mention pixels. They attached a 3x3 array of piezoelectric elements (roughly 3 cm in diameter each) behind a 13" OLED panel (one picture also shows a 5x4 array), using a sturdy frame structure to minimize interference of acoustic vibrations between the 9 (or 20) elements of the array.
Not to say that this isn't interesting, but it’s not display pixels emitting sound.
Animats · 1d ago
Right.
It's a way of putting speakers behind a display, which will probably be useful. This may improve bass response in laptops, because the speaker area is larger.
lmpdev · 1d ago
Would the vibration be detectable via touch?
It would be wild to integrate this into haptics
skhameneh · 1d ago
Audi somewhat did this with the display in the Q8 E-Tron.
I don’t recall if the display is OLED, but it has excellent haptics.
dzhiurgis · 1d ago
How does it work? You force-press (remember 3D Touch?) when you find the item you're after?
skhameneh · 21h ago
It’s fully capacitive with a magnetic resonance actuator, there didn’t seem to be any force inputs nor simulated force with touch radius. The touch input is very responsive.
api · 1d ago
A laptop improvement I'd love would be a haptic keyboard. You could then make the entire laptop waterproof as well as do keyboard reconfiguration for certain tasks.
I always thought Apple's Touch Bar was some kind of entry point into that idea, but it wasn't; it wasn't useful, and it just died.
thfuran · 20h ago
Wouldn't it be pretty miserable to actually type on? Not that laptop keyboards are usually very good.
api · 6h ago
I tried a prototype continuous feedback haptic keyboard once. You could feel the “keys”. Took getting used to but it wasn’t bad. It basically tricks your senses into thinking there is something three dimensional there.
I wonder if power consumption is a problem. That requires constant energy use. There’s also been problems with them emitting a faint sound, since you’re creating vibrations.
kragen · 1d ago
They just mounted the display on top of an array of piezo buzzers, is all. But it's true that that wouldn't work very well with an LCD.
teekert · 1d ago
I guess there are limits, like a pixel should never move more than its size, or you limit resolution (at least from some angles). So deep bass is out of the question?
It is getting very interesting: sound, possibly haptics. We already had touch of course, including fingerprint (and visuals of course). We are more and more able to produce rich sensory experiences with panes of glass.
milesvp · 1d ago
Yeah, I don’t know, I suspect you’re right. I know that 25 years ago there were big flat speakers that drove sound with little holes. Those things could only handle high ends despite being very large themselves. But I know that with DSP you can treat an array of microphones as a single microphone, and the diameter of the array sort of dictates the size of the microphone. Speakers and mics tend to sort of be opposites, in that you can drive or be driven by either. For years I’ve wanted to build a wall of cheap speakers and experiment with this. I suspect I’d find out what we already know, which is that to get deep frequencies you have to move a lot of air, and an array of small piezos can never move a lot of air…
That said, it might matter a lot what the substrate was. If it was light and flexible, you could maybe get all the piezos to move it in a way that produces a very deep frequency. You could probably get deeper frequencies than the power output of the piezos would suggest by taking advantage of the resonance frequency of the material. But you’d be stuck with that one frequency, and there’d be a tradeoff in response time.
walterbell · 1d ago
Could such a display also function as a microphone?
GenshoTikamura · 1d ago
A proper telescreen is not the one you watch and listen to, but the one that watches you and listens to you, %little_brother%
orbital-decay · 1d ago
Technically yes, piezo cells are reversible, just like about anything that can be used to emit sound. You can use the array for programmable directionality as well.
jdranczewski · 1d ago
Good point, piezos do also generate voltages when deformed, so this could conceivably be run in reverse... Microphone arrays are already used for directional sound detection, so this could have fun implications for that application given the claimed density of microphones.
Nevermark · 1d ago
You could likely do both at the same time.
The "microphone" would be the measurement of interference between speaker demand and actual response.
nine_k · 1d ago
Would a piezo crystal, mounted behind a screen and the glass panel atop it, have enough sensitivity to capture recognizable speech?
spookie · 1d ago
If you put a DualSense controller atop glass and send sound through it to the back-left or back-right channels (which control the haptics), you can definitely hear it, so I presume yes.
jtthe13 · 1d ago
That’s super impressive. I guess that would work for a notification speaker. But for full sound I have doubts about the low frequencies. I would assume you would need a woofer anyway in a home setting.
I have built those for an outdoor space. They work surprisingly well for mids to highs
toast0 · 1d ago
You could always have a separate speaker for low frequencies. Low frequency sound tends to be perceived with less of a directional component, so if you have nice directionality of the highs and a single source for the lows, that's pretty acceptable. Removing the need to handle highs might make it easier to put together a speaker for the lows in the confines of a modern flat screen.
steelbrain · 1d ago
This is a long shot, but does anyone know if there's an audio recording of the sound the display produced? Curious
Edit: Found it: https://advanced.onlinelibrary.wiley.com/doi/10.1002/advs.20...
Go to supporting information on that page and open up the mp4 files
IanCal · 1d ago
Good find - the first video is a frequency sweep, video 2 has some music: https://advanced.onlinelibrary.wiley.com/action/downloadSupp...
Edit - I'm not sure that's the same thing? The release talks about pixel based sound, the linked paper is about sticking an array of piezoelectric speakers to the back of a display: https://www.eurekalert.org/news-releases/1084704
Edit 2 - You're right, the press release is pretty poor at explaining this, though. It is not the pixels emitting the sound. It's an array of something like 25 speakers arranged like pixels. But the current one is just wrong.
I wonder about the mentioned application in mobile devices. With mobile and tablet devices one usually has a very durable glass layer between the screen and the outside world - not sure if sound would be able to pass through that.
tuukkah · 1d ago
> This breakthrough enables each pixel of an OLED display to simultaneously emit different sounds
> The display delivers high-quality audio
Are multiple pixels somehow combined to reproduce low frequencies?
GenshoTikamura · 1d ago
Theoretically, any frequency can be produced by the interference of ultrasonic waves, but the amplitude is questionable, given that these emitters are embedded into a thin substrate.
ComputerGuru · 1d ago
Seems somewhat niche due to physics. When you are ten feet away from a screen (or even three), you can scarcely distinguish between audio emanating from the upper-left “pixel/voxel” (to give a new meaning to an old word) and from the bottom-right, let alone from two adjacent locations.
saltcured · 1d ago
I think you're trying to make an argument similar to those arguing against "retina" displays, i.e. there is some minimum perceptual angular resolution for sound, so information finer than that is pointless? I think you're either underestimating the perceptual resolution or assuming a very small screen at a large distance.
I think that kind of resolution is good enough to overlap a lot of task focused screen fields of view. I have experienced a pretty clear "central" sound stage within 30-45 degrees or so with regular stereo speakers. That field can imply a lot of subtle positioning within it, not even considering wild panning mixes. I'm talking about the kind of realistic recording where it feels like a band in front of you with different instruments near each other but not colocated, like an acoustic ensemble. Obviously you cannot shrink this down to a phone at arm's length or a small embedded control screen and still have the same amount of spatial resolution crammed into a very narrow field.
When I sit with a 14" laptop at a normal distance for typing, it is also easy to distinguish tapping sounds along the screen. I just did a blind test by closing my eyes and having my wife tap the screen with a plastic pen. The interesting question to me, though, is whether that is just perception via the binaural sense, or really incorporating some intuition about the screen's structure. Its response to a tap does clearly vary by distance from bezels and hinges...
ComputerGuru · 1d ago
Impressed that you took the time to run a quick test. Spatial compression is a hard problem though, the most expensive sound bars are easily beat by a cheap 2.1 setup. Phones are (mostly) still mono output even though a speaker out the top and bottom (perpendicular to the viewing angle) would be a win for watching videos in landscape, probably because the improvement wouldn’t be noticeably appreciated (enough to be economically feasible, anyway).
Interesting research all the same, of course!
russdill · 1d ago
If you get enough, you might be able to do some really interesting things using it as a phased array
deadbabe · 1d ago
Not niche at all. You could have a phone for example that plays sounds from areas of the screen where they originate. Key presses, buttons, notification pop ups, etc.
ComputerGuru · 1d ago
You missed (or didn't address, at any rate) my point. For a phone where all audio channels are in between both ears (or even worse, held off to the right/left of both ears) with only a minute difference in the angle of the arc to each of the binaural inputs, convince me that you can reasonably distinguish between sounds emanating from different locations (facing the same direction - not at all like a speaker pointing out the side of each phone!!) at a rate statistically distinguishable from chance.
InitialLastName · 23h ago
With enough speakers coupled on the order of the wavelength of the sound (and for most frequencies, these seem like they will be), you can use beamforming to aim different sounds in different directions from a single source, with speakers facing in only one direction.
For an extreme example of this, refer to the Sphere, where they can target sounds at individual audience members from any arbitrary direction using speakers in the surround display.
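The delay math behind this kind of steering is compact enough to sketch. A minimal, illustrative delay-and-sum example for a one-directional line array (element count, pitch, and sample rate are arbitrary assumptions, not from the paper):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, dry air at ~20 C

def steering_delays(n_elems, pitch_m, theta_deg, sample_rate=48_000):
    """Per-element delays (in samples) that aim a far-field beam
    theta_deg off the array's broadside axis."""
    theta = math.radians(theta_deg)
    raw = [i * pitch_m * math.sin(theta) / SPEED_OF_SOUND * sample_rate
           for i in range(n_elems)]
    offset = min(raw)  # shift so no delay is negative
    return [d - offset for d in raw]

# Four elements 1 cm apart, beam steered 30 degrees off axis:
print(steering_delays(4, 0.01, 30.0))
```

Feed each element the same signal delayed by its entry in this list and the wavefronts add constructively only along the chosen angle; different signals with different delay sets give the multiple simultaneous beams described above.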
ra120271 · 1d ago
It would be interesting to learn in time what this means for the durability of the display. Do the vibrations induce stresses that increase component failure?
Also how differing parts of the screen can generate different sound sources to create a sound scape tailored for the person in front of the screen (eg laptop user)?
Interesting tech to watch!
_joel · 1d ago
I've had a Sony Bravia with similar technology for a few years now (it uses small actuators instead of generating sound from the OLED itself, but the vibrations will be the same), and it's still sounding fantastic after daily use and a couple of moves.
amelius · 1d ago
How can this produce directional sound beams if there is a glass plate covering the display?
pornel · 1d ago
Probably similar to antennas — using phase shifting and interference.
amelius · 1d ago
Yes, probably, but my question was more about the glass cover and if it wouldn't basically destroy the effect?
spookie · 1d ago
If it is touching the glass it amplifies it, if anything.
(Edit: I should say it amplifies some frequencies while attenuating others)
amelius · 1d ago
Can the video and audio be controlled independently?
atoav · 1d ago
Surely? Images play at <120Hz, and sound requires you to run at or above the Nyquist limit, which for human hearing means the standard 48kHz.
So all AV systems run image and sound separately, and thus you can affect them separately.
(I assumed you assumed the changing colors displayed on the pixels at 120Hz somehow drive the sound, which needs to change at 48000Hz.)
Anything but separate inputs makes no sense given the orders-of-magnitude difference in the frequencies at which they need to be driven. Also, the color/brightness of a pixel means nothing for the sound.
amelius · 1d ago
I mean, can you play video without hearing e.g. a 120Hz hum?
atoav · 1d ago
Hopefully? I guess this depends on the decoupling. Or they just highpass it and leave that region to the sub.
atoav · 1d ago
So you are saying we get displays that could run wavefield synthesis?
If you don't know what wavefield synthesis is: you basically have an array of evenly spaced speakers and for each virtual sound source you drive each individual speaker with a specially delayed signal that recreates the wavefield a sound source would create if it occupied that space. This is basically as close as you can get to the thing being in actual space.
Of course the amount of delay lines and processing needed is exorbitant, and for a screen the limiting factor is the physical dimension of the thing, but if you can create high resolution 2D loudspeaker arrays that glow, you can also create ones that do not.
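The per-speaker computation described above is simple to sketch: delay each speaker's copy of the source signal by the travel time from the virtual source's position. A minimal illustration (the positions, the 1/r gain model, and the sample rate are my own assumptions, not from the paper):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, dry air at ~20 C

def wfs_delays(speaker_positions, source_position, sample_rate=48_000):
    """For one virtual source, return (delay_in_samples, gain) per speaker:
    each speaker replays the source signal delayed by the travel time from
    the virtual source's position, with a simple 1/r spreading loss."""
    sx, sy = source_position
    out = []
    for (x, y) in speaker_positions:
        r = math.hypot(x - sx, y - sy)
        out.append((r / SPEED_OF_SOUND * sample_rate, 1.0 / max(r, 1e-3)))
    return out

# A short line array with a virtual source 1 m behind its center:
speakers = [(i * 0.1, 0.0) for i in range(4)]
for delay, gain in wfs_delays(speakers, (0.15, -1.0)):
    print(round(delay, 1), round(gain, 3))
```

Speakers closer to the virtual source get smaller delays and higher gains, so the wavefronts they emit line up into the sphere the virtual source would have produced.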
kragen · 1d ago
Because audio runs at such low sample rates, it's not exorbitant by current standards. Suppose you have an 8×16 array of speakers behind your screen, each running at 96ksps. That's only 12.3 million samples per second to generate, on the order of 200 million multiply-accumulates per second for even the most extreme scenarios. Lattice's iCE40UP5K FPGA https://www.farnell.com/datasheets/3215488.pdf#page=10 contains 8 "sysDSP" blocks which can do two 8-bit multiply-accumulates per clock cycle at 50MHz even when pipelining is disabled, so 800 million per second. It's 2.1 by 2.5 mm and costs US$5 at Digi-Key. I'm not familiar with AVX, but I believe your four-core 2GHz AVX512 CPU can do 256 8-bit multiply-accumulates per cycle, five hundred thousand million per second, so we're talking about an exorbitant amount of computation that's 0.04% of your CPU.
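The arithmetic above can be checked in a few lines (the 16 multiply-accumulates per output sample is my assumption for the "extreme" filtering case, chosen to land near the quoted 200 million):

```python
# Back-of-envelope check of the numbers above: an 8x16 speaker array
# at 96 ksps, with an assumed 16 MACs of filtering per output sample.
speakers = 8 * 16                  # 128 emitters
rate = 96_000                      # samples per second per emitter
samples_per_sec = speakers * rate  # total samples to generate

taps = 16                          # assumed multiply-accumulates per sample
macs_per_sec = samples_per_sec * taps

print(samples_per_sec)  # 12288000, i.e. ~12.3 million
print(macs_per_sec)     # 196608000, i.e. ~200 million MACs per second
```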
atoav · 16h ago
I wasn’t thinking of a 16x8 array, I was thinking of a 160x80 array. Spacing your speakers too closely will have diminishing returns, but it depends on the frequency at which you want to operate. If we assume a frequency of 20kHz, you should space your speakers at half the wavelength to avoid spatial aliasing artifacts, so something like a speaker every 8mm. This is especially important if the listener position is close.
This means 13000 precise delay lines, multiplied by the number of virtual sound sources you want to allow for at the same time; let's assume 64. At a sampling rate of 48kHz that means 39x10⁹ samples per second. That isn't nothing, especially for a consumer device, and especially if the delay values for each of the virtual source-speaker combinations need to be adjusted on the fly.
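Both numbers above can be reproduced directly (a 343 m/s speed of sound is the only added assumption):

```python
# Half-wavelength element spacing for a 20 kHz steering limit,
# and total sample throughput for the 160x80 array described above.
c = 343.0                        # speed of sound in air, m/s (assumption)
f_max = 20_000                   # highest frequency to steer, Hz
spacing = c / f_max / 2          # half wavelength, avoids spatial aliasing
print(round(spacing * 1000, 1))  # ~8.6 mm, i.e. "a speaker every 8mm"

speakers = 160 * 80              # 12800, rounded to 13000 above
sources = 64                     # simultaneous virtual sources
rate = 48_000                    # sampling rate, Hz
samples = speakers * sources * rate
print(samples)                   # 39321600000, i.e. ~39x10⁹ per second
```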
kragen · 14h ago
Hmm, I see. I think that you can cheat quite a bit more than that, though, if your objective is only to fool human hearing (as your 48ksps and 20kHz numbers suggest): the humans can only use phase information to detect the directionality of sound up to a few hundred Hz, relying entirely on amplitude attenuation above that, presumably because their neurons run too slow. But maybe your objective is sonar beamforming or something.
dedicate · 1d ago
My mind immediately goes to VR – imagine how much more immersive that would be!
miguelnegrao · 1d ago
For VR, since you have to put the helmet on your head, it is usually assumed users don't mind putting on headphones as well, so most solutions are via headphone-based spatialization.
globular-toast · 1d ago
Slight problem being you don't hear through your eyes.
short_sells_poo · 1d ago
Easily solvable by a piece of bread soaked with a drop of lysergic acid diethylamide :D
Ingest. Wait 30-60 minutes. Enjoy the ride.
charlie0 · 1d ago
I would guess the bass is nonexistent here. Cool idea though.
paulbgd · 1d ago
Not this, but similar: Sony’s acoustic surface audio for their displays uses actuators behind the screen to vibrate it. It’s not crazy bass, but I’d say it’s on par with regular built-in TV/laptop speakers.
Xss3 · 1d ago
Can this be used like a phased array?
nulld3v · 20h ago
Interesting, I bought a parametric speaker (from Kickstarter of all places [1]), it too uses piezoelectric emitters.
My understanding: DML speakers directly emit audible sound by vibrating at the resonance frequency of a material. Parametric speakers emit ultrasonic waves and control how those waves interfere using phased-array techniques; the interference is what produces sound.
It produces completely different results compared to DML: the sound quality is not good, as interference just can't produce the full spectrum of audible frequencies.
In return, you gain extreme control over directionality: the interference only happens in midair or when the ultrasonic waves hit something solid, so the speaker itself is mostly silent. It's actually "too directional" for me to use; the beam is still audible after multiple bounces off of solid objects. I was hoping to use it in an office as a sort of "personal speaker", but after the beam strikes my head and I hear sound, a part of it is reflected and bounces off the ceiling and floor repeatedly, causing the speaker to be audible at the opposite end of the floor.
[1] https://www.kickstarter.com/projects/1252022192/focusound-th...
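The difference-frequency mechanism behind parametric speakers can be illustrated numerically: mix two ultrasonic tones, pass them through a nonlinearity, and the audible difference tone appears. This is only a toy sketch; squaring is a crude stand-in for the air's nonlinearity, and the tone frequencies are arbitrary:

```python
import numpy as np

fs = 192_000                       # simulation rate, Hz (assumption)
t = np.arange(fs) / fs             # one second of time
carrier = np.sin(2 * np.pi * 40_000 * t) + np.sin(2 * np.pi * 41_000 * t)
demod = carrier ** 2               # crude stand-in for air's nonlinearity

spectrum = np.abs(np.fft.rfft(demod))
freqs = np.fft.rfftfreq(len(demod), 1 / fs)
spectrum[0] = 0.0                  # ignore the DC term
audible = freqs < 20_000
peak = freqs[np.argmax(spectrum[audible])]
print(peak)  # 1000.0 — the 41 kHz - 40 kHz difference tone
```

Everything else the nonlinearity produces (80, 81, 82 kHz) stays ultrasonic; only the 1 kHz difference tone lands in the audible band, which is why the quality is limited but the directionality is extreme.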
Many TFT and OLED panels today can produce sound unintentionally based on screen contents. This is mostly noticeable with repeating horizontal lines, which tend to produce whining at some fraction of the line frequency. Likely electrostriction.
This here seems to be about adding separate piezoelectric actuators to the display though, it doesn’t seem to use the panel itself.
> by embedding ultra-thin piezoelectric exciters within the OLED display frame. These piezo exciters, arranged similarly to pixels, convert electrical signals into sound vibrations without occupying external space.
cubefox · 1d ago
This is impressive. Though perhaps not very useful. Humans (and animals in general) are quite bad at precisely locating sound anyway. We only have two input channels, the right and the left ear, and any location information comes from a signal difference (loudness usually) between the two.
mjlm · 1d ago
Localization of sound is primarily based on the time difference between the ears. Localization is also pretty precise, to within a few degrees under good conditions.
user_7832 · 1d ago
Nit: time difference, phase difference, amplitude difference, and head related transfer function (HRTF) all are involved. Different methods for different frequency localisation.
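As a rough illustration of the time-difference cue, the classic Woodworth spherical-head approximation (the head radius and the formula itself are textbook assumptions, not from this thread) gives the familiar sub-millisecond ITDs:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s
HEAD_RADIUS = 0.0875    # average human head radius, m (assumption)

def itd_seconds(azimuth_deg):
    """Woodworth ITD approximation: r/c * (theta + sin(theta)),
    for a source azimuth_deg off the median plane."""
    theta = math.radians(azimuth_deg)
    return HEAD_RADIUS / SPEED_OF_SOUND * (theta + math.sin(theta))

# A source 90 degrees to the side gives the maximum ITD, ~0.65 ms:
print(round(itd_seconds(90.0) * 1e6), "microseconds")
```

That the brain resolves localization down to a couple of degrees from delays this small is exactly why the cue is so dominant at low and mid frequencies.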
There’s this excellent (German?) website that lets you play around with and understand these via demos. I’ll see if I can find it.
Edit: found it, it’s https://www.audiocheck.net/audiotests_stereophonicsound.php
I think for stereo sound, media like music, TV, movies and video games use loudness difference instead of time difference to indicate location.
GenshoTikamura · 1d ago
In music, simple panning works okay, but never exceeds the stereo base of a speaker arrangement. For a truly immersive listener experience, audio engineers always employ timing differences and separate spectral treatments of the stereo channels, HRTF being the cutting edge of that.
miguelnegrao · 1d ago
Atmos, as used in cinema rooms, is as far as I know amplitude based (VBAP probably), and it is impressive and immersive. Immersion depends more on the number and placement of loudspeakers. Some systems do use Ambisonics, which can encode time differences as well, at least from microphone recordings.
HRTF as used in binaural synthesis is for headphones only, not relevant here.
miguelnegrao · 1d ago
This is true, but a high density of loudspeakers allows the use of Wave Field Synthesis, which recreates a full physical sound field where all 3 cues can be used.
badmintonbaseba · 1d ago
At least video games use way more complex models for that, AFAIK. It might be tricky to apply to mixes of recorded media, so loudness is commonly used there.
miguelnegrao · 1d ago
Unreal Engine, the only engine I'm more familiar with, implements VBAP which is just amplitude panning when played through loudspeakers for panning of 3D moving sources. It also allows Ambisonics recordings for ambient sound which is then decoded into 7.1.
For headphone-based spatialization (binaural synthesis), usually virtual Ambisonics fed into HRTF convolution is used, which is not amplitude based; especially height is encoded using spectral filtering.
So loudspeakers -> mostly amplitude based; headphones -> not amplitude based.
badmintonbaseba · 1d ago
Which makes sense, there is only so much you can do with loudspeakers to affect the perceived location, you don't really know where the loudspeakers and the listener are located relative to each other.
miguelnegrao · 1d ago
Actually, the farther away the speakers are from the angles specified in the 7.1 format (see https://www.dolby.com/about/support/guide/speaker-setup-guid...), the worse the localization accuracy will be. And if the person is not sitting centered relative to the loudspeakers, but closer to one of them, localization can completely collapse, and it will sound like the sound only comes from the closest loudspeaker.
In the case of gamers, they are usually centered relative to the loudspeakers, and usually the loudspeakers tend to be placed symmetrical to the computer screen, so the problem is not so bad.
For cinema viewers the problem is much worse; most of the audience is off center... That is why 7.1 has a center loudspeaker: the dialogue is sent directly there to make sure that at least the dialogue comes from the right direction.
miguelnegrao · 1d ago
I'm sorry, but this is not accurate at all. Using "only" two signals, humans are quite good at localizing sound sources in some directions:
> Concerning absolute localization, in frontal position, peak accuracy is observed at 1–2 degrees for localization in the horizontal plane and 3–4 degrees for localization in the vertical plane (Makous and Middlebrooks, 1990; Grothe et al., 2010; Tabry et al., 2013).
from https://www.frontiersin.org/journals/psychology/articles/10....
Humans are quite good at estimating distance too, inside rooms.
Humans use 3 cues for localization, time differences, amplitude differences and spectral cues from outer ears, head, torso, etc. They also use slight head movements to disambiguate sources where the signal differences would be the same (front and back, for instance).
I do agree that humans would not perceive the location difference between two pixels next to each other.
cubefox · 5h ago
As I wrote elsewhere:
> Yet I'm usually not even noticing whether a video has stereo or mono sound. So I highly doubt that ultra precise OLED loudspeakers would make a noticeable difference.
GenshoTikamura · 1d ago
Yep, the hearing is more akin to a hologram than a mere stereo pair imaging.
K0balt · 1d ago
You are misinformed.
Amplitude, spectral, and timing are all integrated into a positional / distance probability mapping. Humans can estimate the vector of a sound by about 2 degrees horizontal and 4 degrees vertical. Distance is also pretty accurate, especially in a room where direct and reflected sounds will arrive at different times, creating interference patterns.
The brain processes audio in a way not too dissimilar from the way that medical imaging scanners can use a small number of sensors to develop a detailed 3d image.
In a perfectly dark room, you can feel large objects by the void they make in the acoustic space of ambient noise and reflected sounds from your own body.
Interestingly, the shape of the ear is such that different phase shifts occur for front and rear positions of reflected and conducted sounds, further improving localization.
We often underestimate the information richness of the sonic sensome, as most live in a culture that deeply favors the visual environment, but some subcultures and also indigenous cultures have learned to more fully explore those sensory spaces.
People of the extreme northern latitudes may spend a much larger percentage of their waking hours in darkness or overwhelming white environments and learn to rely more on sound to sense their surroundings.
PaulHoule · 1d ago
I learned to move around in dark rooms when I was young. I definitely can "feel large objects by the void they make", and people often turn on the lights because they think I need them to "see" when I really don't.
cubefox · 23h ago
> Humans can estimate the vector of a sound by about 2 degrees horizontal and 4 degrees vertical.
Yet I'm usually not even noticing whether a video has stereo or mono sound. So I highly doubt that ultra precise OLED loudspeakers would make a noticeable difference.
badmintonbaseba · 1d ago
The main utility isn't for the user to more precisely locate the sound source within the screen. Phased speaker arrays allow emitting sound in controlled directions, even multiple sound channels to different directions at the same time.