The gulf between "human" and "machine" is closing. Machine learning has enabled virtual reality to feel more "real" than ever before, and AI's replication of processes that were once confined to the human brain is ever-improving. Both are bringing technology into ever-closer proximity with the human body. Things are getting weird.
And they are going to get a lot weirder.
Let's use this question as a starting point: Is standing on the edge of the roof of a Minecraft cathedral in VR mode scarier than looking over the edge of a mountain in Norway? I have done both, and the sense of vertigo was greater in Minecraft.
Our brain has evolved to let us understand a version of the world we live in, and to make decisions that optimize the survival of our genes. Due to this wiring, a fear of heights is a sensible apprehension to develop: Don't go near the edge of tall things because you might fall off and die.
What we see is not reality itself; it is our brain's interpretation of the input data provided by our eyes, reduced to the parts of reality that we have evolved to consider useful. By understanding how "the process of seeing" becomes "what we see," the illusions of virtual reality can be made to feel more real than reality itself: Minecraft versus Norwegian mountains, for example.
It will take a long time before humans stop perceiving things like the VR cathedral roof as risks that pose an existential threat. In the meantime, we will continue to develop technologies that con the brain into such interpretations.
At the same time, our understanding of the brain is becoming ever-greater. Modern research into neuroplasticity has shown us that we can re-train parts of the brain to take over from parts that stop functioning. As our understanding grows, it is not a big leap to believe that we could programmatically adjust the processing of different artificial stimuli to pull off much greater sleights of hand than VR does today.
The tricks that can be played on our sense of hearing are being exposed by a new wave of smart earbuds and sound software. The recently announced Oculus earbuds show the company's dedication to full immersion, and the app formerly known as H__r experiments with acoustic filtration, turning background noise into harmonies.
The illusions of virtual reality can feel more real than reality itself.
The eNose Company, the self-described "specialists in artificial olfaction" (the science of smelling without a nose), has developed a technology that replicates the function of a human nose. The applications range from lung health to the replacement of sniffer dogs.
With these developments in mind, it is not hard to imagine a full VR rig (headset, earbuds, gloves, maybe even sensors for the nose and mouth) that completely blurs the line between virtual reality and reality itself.
In fact, the virtual experience may offer avenues of perception that reality cannot, especially if we find ways to stimulate chemicals in the brain that strengthen the synapses around memories. Perhaps Transcendence or the VR pods of Minority Report are not so far away.
As a result of these developments, technology is becoming closely merged with our bodies. However, the interplay between technology and the body does not end with VR. It gets even more interesting when you add artificial intelligence to the mix, as AI attempts to replicate the processes of the brain within machines.
Technologists have spent decades trying to use our understanding of the brain to build algorithms that solve highly complex, non-linear problems. Recent years have brought more notable breakthroughs than ever before, thanks to progress in core algorithms, smarter implementations of those algorithms and improvements in sheer compute power.
We are still a long way from general AI (a model that recreates the entire brain), and it is not clear if and when we could get to that point. One limiting factor is that we need to fully understand the brain before we can build a machine that replicates it.
By studying different processes of the brain (image recognition, learning a language and so on), we can decipher how those processes work and how we learn. Do brain-like algorithms need to be shown lots of similar examples in order to learn, or can they teach themselves? In other words, is the algorithm "supervised" or "unsupervised"?
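That distinction can be sketched in a few lines of Python. The two functions below are illustrative toys, not real AI systems or anything from the research discussed here: a nearest-centroid classifier learns from labeled examples (supervised), while a tiny one-dimensional k-means discovers the same grouping with no labels at all (unsupervised).

```python
# Supervised vs. unsupervised learning, reduced to a toy 1-D example.

def nearest_centroid_fit(points, labels):
    """Supervised: average the points of each label into a centroid."""
    buckets = {}
    for p, lab in zip(points, labels):
        buckets.setdefault(lab, []).append(p)
    return {lab: sum(ps) / len(ps) for lab, ps in buckets.items()}

def nearest_centroid_predict(centroids, x):
    """Classify x by the closest learned centroid."""
    return min(centroids, key=lambda lab: abs(x - centroids[lab]))

def kmeans_1d(points, iters=10):
    """Unsupervised: discover two clusters with no labels given."""
    centers = [min(points), max(points)]  # crude two-cluster initialization
    for _ in range(iters):
        groups = ([], [])
        for p in points:
            groups[0 if abs(p - centers[0]) <= abs(p - centers[1]) else 1].append(p)
        centers = [sum(g) / len(g) if g else c for g, c in zip(groups, centers)]
    return sorted(centers)

points = [1.0, 1.2, 0.8, 5.0, 5.3, 4.9]
labels = ["low", "low", "low", "high", "high", "high"]

centroids = nearest_centroid_fit(points, labels)
print(nearest_centroid_predict(centroids, 1.1))  # → low (learned from labels)
print(kmeans_1d(points))                         # same two groups, found unaided
```

The supervised version is only as good as its labels; the unsupervised version needs none, but has to be told (or guess) how many groups to look for — which is a small window into why truly self-teaching algorithms are the harder problem.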
Developing truly unsupervised AI will continue to challenge practitioners for years to come, including the technology giants who have embraced (read: made lots of acquisitions in) the industry.