Maybe I missed this, but isn't the underlying concept here big news?
Am I understanding this right? It seems that by reading activity in visual areas of the brain, a machine can effectively act as a rendering engine, with knowledge of colour, brightness, etc. per pixel, based on an image the person is seeing? And AI is being used to help because the readout is lossy?
This seems huge. Is there other terminology around this I can kagi to understand more?
There are startups working on less intrusive (e.g. headset) brain-computer interfaces (BCIs).
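On the mechanics: the common search terms are "neural decoding" and "visual image reconstruction" (often "fMRI-to-image"). Published work generally doesn't read pixels directly; it fits a regularized linear map from brain activity (e.g. fMRI voxel responses) to image features, then uses a generative model to turn those noisy features into a plausible picture, which is why AI is involved for the lossy part. Below is a minimal sketch of just the decoding step using synthetic stand-in data; all names, sizes, and numbers are hypothetical for illustration, not taken from any particular paper.

```python
# Minimal sketch of linear visual decoding. Assumes paired data:
# X = brain activity (trials x voxels), Y = image features (trials x features).
# Everything here is synthetic stand-in data, not real scans.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_trials, n_voxels, n_features = 200, 5000, 64

# Hypothetical setup: image features (e.g. an embedding of the stimulus)
# drive voxel responses through an unknown linear "encoding" plus noise.
Y = rng.normal(size=(n_trials, n_features))          # features of shown images
W = rng.normal(size=(n_features, n_voxels)) * 0.1    # unknown encoding weights
X = Y @ W + rng.normal(size=(n_trials, n_voxels))    # noisy voxel activity

# Fit a regularized linear map from brain activity back to image features.
decoder = Ridge(alpha=10.0).fit(X[:150], Y[:150])

# Decoded features are noisy ("lossy"); in real pipelines a generative
# model then renders them into a plausible image rather than exact pixels.
Y_hat = decoder.predict(X[150:])
print("decoding correlation:", np.corrcoef(Y_hat.ravel(), Y[150:].ravel())[0, 1])
```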