VR and AI

Two Key Technological Developments in Virtual Reality and Artificial Intelligence

October 20, 2020

It’s getting harder and harder to distinguish sci-fi from the real world these days.


Read on, and you’ll see what we mean.


1.  Simulating the sense of touch in virtual reality.
Scientists at the University of Birmingham studied Rayleigh waves, a type of seismic wave, to come up with a universal scaling law for the sensitivity of touch.


Wait, what?
Basically, when you touch something, it creates Rayleigh waves that travel through layers of skin and bone and get picked up by your body's touch-receptor cells. That response can be modeled and used in virtual reality to simulate the sense of touch.


Universal law of touch
The team of scientists led by Dr Montenegro-Johnson used the mathematics of earthquakes to create the models for how vibrations travel through the skin.


They discovered that “vibration receptors beneath the skin respond to Rayleigh waves in the same way regardless of age, gender, or even species.”
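The length scale that matters here is the wavelength of the Rayleigh wave, which depends on how fast the wave travels and how fast the surface vibrates. Here's a back-of-the-envelope sketch of that relationship; the wave speed and frequency below are illustrative assumptions, not measurements from the study.

```python
# Sketch of the scaling idea: a vibration launches a Rayleigh wave through
# the skin, and the relevant length scale is its wavelength, lambda = c / f.
# The numbers used here are illustrative assumptions only.

def rayleigh_wavelength(wave_speed_m_s, frequency_hz):
    """Wavelength (in metres) of a surface wave with the given speed and frequency."""
    return wave_speed_m_s / frequency_hz

# Assumed values: a Rayleigh-wave speed in soft tissue of a few metres per
# second, and a vibration frequency touch receptors respond well to (~250 Hz).
wavelength = rayleigh_wavelength(5.0, 250.0)
print(f"{wavelength * 1000:.0f} mm")  # prints "20 mm"
```

The "universal" part of the law is the claim that the depth of the vibration receptors, measured against this wavelength, comes out roughly the same across individuals and species.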


Virtual haptic reality
The researchers from the University of Birmingham conducting the study are part of a European consortium called H-Reality.


H-Reality is on a mission to “imbue virtual objects with a physical presence, providing a revolutionary, untethered, virtual-haptic reality."


Got it, next.


2.  Making AI learn more like how a child learns.
AI is cool and all, but the problem is it's super data-intensive. To teach an AI to recognize a picture of a rhinoceros, you have to feed it hundreds, even thousands, of pics of rhinos.


Why does that matter?  Well, it makes creating the models more expensive because they’re hungry for data.


Compare that to how a child learns: show them a few examples, or just one, and they get it.  In fact, sometimes it doesn't even take one pic. Show a kid a horse and a rhino, tell them a unicorn is something in between, and they can pick out a unicorn without ever having seen a single pic of one.


Researchers at the University of Waterloo call this "less than one shot" learning.  In their study, they distilled a training set of 60,000 images of handwritten digits down to fewer than 10 images, and models trained on the distilled set had nearly the same accuracy as one trained on all 60,000.  These findings could change the course of AI by shrinking the data sets needed to train models, which is a BFD.
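The horse-plus-rhino-equals-unicorn trick can be sketched in code. The key idea behind "less than one shot" learning is soft labels: a single example can carry partial membership in several classes, so two examples can separate three classes. Everything below (the 1-D feature, the prototype positions, the label mixes) is an invented toy illustration, not the Waterloo team's actual setup.

```python
import numpy as np

# Two prototype examples on a 1-D feature axis, each with a SOFT label over
# three classes ("horse", "unicorn", "rhino") -- fewer prototypes than
# classes, which is the "less than one shot" idea. Values are illustrative.
protos = np.array([0.0, 1.0])
soft_labels = np.array([[0.6, 0.4, 0.0],   # mostly horse, a bit unicorn
                        [0.0, 0.4, 0.6]])  # mostly rhino, a bit unicorn
classes = ["horse", "unicorn", "rhino"]

def classify(x):
    # Inverse-distance weighting: blend the prototypes' soft labels,
    # giving nearer prototypes more say, then pick the top class.
    d = np.abs(protos - x) + 1e-9   # avoid division by zero at a prototype
    w = 1.0 / d
    probs = (w @ soft_labels) / w.sum()
    return classes[int(np.argmax(probs))]

print(classify(0.0))  # prints "horse"
print(classify(1.0))  # prints "rhino"
print(classify(0.5))  # prints "unicorn" -- a class with no prototype of its own
```

The point of the toy: the "unicorn" class wins in the region between the two prototypes even though no unicorn example was ever shown, just as the kid picks out the unicorn from a description of "something in between."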


Hey Elon, are we in the Matrix yet? 
