Meta Offered a Glimpse into the XR R&D That’s Costing It Billions

Michael Abrash leads the team at Meta Reality Labs Research, which is tasked with researching technologies the company believes could be foundational to XR and the metaverse decades from now. At Connect 2021, Abrash shared some of the group’s very latest work.

Full-body Codec Avatars

Meta’s Codec Avatar project aims to build a system capable of capturing and representing photorealistic avatars for use in XR. A major challenge beyond simply ‘scanning’ a person’s body is getting it to then move in realistic ways, not to mention making the whole system capable of running in real-time so that the avatar can be used in an interactive context.
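
Meta hasn’t published the internals of this system, but the textbook baseline for making a scanned mesh move with a skeleton is linear blend skinning. Below is a minimal NumPy sketch of that baseline (all parameter names are illustrative); Codec Avatars use learned neural models that go well beyond it.

```python
import numpy as np

def linear_blend_skinning(rest_verts, bone_transforms, weights):
    """Pose a scanned mesh by blending per-bone transforms.

    rest_verts: (V, 3) mesh vertices in the rest pose
    bone_transforms: (B, 4, 4) transform of each bone relative to
        its rest configuration
    weights: (V, B) per-vertex skinning weights, rows summing to 1
    Returns the posed (V, 3) vertices.
    """
    V = rest_verts.shape[0]
    # Homogeneous coordinates so 4x4 transforms apply directly.
    homo = np.concatenate([rest_verts, np.ones((V, 1))], axis=1)   # (V, 4)
    # Transform every vertex by every bone...
    per_bone = np.einsum('bij,vj->bvi', bone_transforms, homo)     # (B, V, 4)
    # ...then blend the results by each vertex's skinning weights.
    blended = np.einsum('vb,bvi->vi', weights, per_bone)           # (V, 4)
    return blended[:, :3]
```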

The company has shown off its Codec Avatar work on various occasions, each time showing improvements. The project started with high-quality heads alone, but it has since evolved to full-body avatars.

The video above is a demo representing the group’s latest work on full-body Codec Avatars, which researcher Yaser Sheikh explains now supports more complex eye movement, facial expressions, and hand and body gestures which involve self-contact. It isn’t stated outright, but the video also shows a viewer watching the presentation in virtual reality, implying that this is all happening in real-time.

With the possibility of such realistic avatars in the future, Abrash acknowledged that it’s important to think about the security of one’s identity. To that end, he says the company is “thinking about how we can secure your avatar, whether by tying it to an authenticated account, or by verifying identity in some other way.”

Photorealistic Hair and Skin Rendering

While Meta’s Codec Avatars are already looking pretty darn convincing, the research group believes the ultimate destination for the technology is to achieve photorealism.

Above, Abrash showed off what he says is the research group’s latest work in photorealistic hair and skin rendering, and the lighting thereof. It wasn’t claimed that this was happening in real-time (and we doubt it is), but it’s a look at the bar the team is aiming for down the road with the Codec Avatar tech.
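
Meta didn’t describe its rendering method, but a classic illustration of why hair needs special treatment is the Kajiya-Kay model, which shades a strand around its tangent rather than a surface normal. Here’s a minimal sketch of that classic term, not Meta’s technique:

```python
import numpy as np

def kajiya_kay_specular(tangent, view_dir, light_dir, exponent=80.0):
    """Classic Kajiya-Kay specular term for a hair strand.

    A strand reflects light in a cone around its tangent T, so the
    specular term depends on the angle between T and the half-vector,
    producing hair's characteristic elongated highlight.
    Inputs are 3D vectors; view_dir and light_dir point away from
    the shaded point.
    """
    t = tangent / np.linalg.norm(tangent)
    h = view_dir + light_dir
    h = h / np.linalg.norm(h)
    t_dot_h = np.clip(np.dot(t, h), -1.0, 1.0)
    sin_th = np.sqrt(1.0 - t_dot_h ** 2)  # sine of angle between T and H
    return sin_th ** exponent
```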

Clothing Simulation

Along with a high-quality representation of your body, Meta expects clothing will continue to be an important way that people express themselves in the metaverse. To that end, the company thinks that making clothes behave realistically will be an important part of that experience. Above, it shows off its work in clothing simulation and hands-on interaction.
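
Meta didn’t detail its simulation approach, but the textbook starting point for cloth is a mass-spring system with position-based constraint relaxation. Here’s a toy NumPy sketch under that assumption; real-time cloth with self-contact, as Meta demonstrated, is far more involved.

```python
import numpy as np

def cloth_step(pos, prev_pos, springs, rest_len, dt=1/60, gravity=-9.8,
               relax_iters=5):
    """Advance a toy mass-spring cloth by one timestep.

    pos, prev_pos: (N, 3) particle positions at t and t - dt
    springs: (S, 2) integer index pairs linking particles
    rest_len: (S,) rest length of each spring
    Returns (new_pos, pos) to feed into the next call.
    """
    # Verlet integration under gravity (y-up).
    accel = np.zeros_like(pos)
    accel[:, 1] = gravity
    new_pos = 2 * pos - prev_pos + accel * dt ** 2

    # Iteratively relax each spring back toward its rest length,
    # splitting the correction between both endpoints.
    for _ in range(relax_iters):
        i, j = springs[:, 0], springs[:, 1]
        delta = new_pos[j] - new_pos[i]
        dist = np.linalg.norm(delta, axis=1, keepdims=True)
        unit = delta / np.maximum(dist, 1e-9)
        correction = 0.5 * (dist - rest_len[:, None]) * unit
        np.add.at(new_pos, i, correction)
        np.add.at(new_pos, j, -correction)
    return new_pos, pos
```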

High-fidelity Real-time Virtual Spaces

While XR can easily whisk us away to other realities, teleporting friends virtually to your actual living space would be great too. Taken to the extreme, that means having a full-blown recreation of your actual home and everything in it, which can run in real-time.

Well… Meta did just that. The company built a mock apartment, complete with a perfect virtual replica of the space and all the objects in it. Doing so makes it possible for a user to move around the real space and interact with it as normal while keeping the virtual version in sync.
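
Meta hasn’t described the underlying architecture, but conceptually the loop is simple: track the pose of each real object and mirror it into a virtual scene every frame. A hypothetical sketch, with every name here being illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class Pose:
    position: tuple[float, float, float]
    rotation: tuple[float, float, float, float]  # quaternion (x, y, z, w)

@dataclass
class VirtualScene:
    objects: dict[str, Pose] = field(default_factory=dict)

    def apply_update(self, object_id: str, pose: Pose) -> None:
        """Mirror one tracked real object's pose into the replica."""
        self.objects[object_id] = pose

def sync_frame(scene: VirtualScene, tracker_updates: dict[str, Pose]) -> None:
    """Once per frame: push every tracked real-world pose into the
    virtual twin so remote guests see objects where they really are."""
    for object_id, pose in tracker_updates.items():
        scene.apply_update(object_id, pose)
```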

So if you happen to have virtual guests over, they could actually see you moving around your real-world space and interacting with everything inside it in an incredibly natural way. Similarly, when using AR glasses, having a map of the space with this level of fidelity could make AR experiences and interactions much more compelling.

Presently this seems to serve as a ‘best case’ scenario of a mapped real-world environment for the company to experiment with. If Meta finds that this kind of perfectly synchronized real and virtual space proves important to valuable use-cases, it may then explore ways to make it easy for users to capture their own spaces with similar precision.

EMG Input

While controllers and hand-tracking have proven to be useful input methods for XR today, once we move toward all-day worn headsets that we take out in public, we’re probably not going to want to swing our arms around or poke at the air in front of us just to type a message or launch an application while on the bus.

To that end, Meta has been researching a much more subtle and natural means of XR input via wrist-worn EMG. The company detailed some of its work on a wrist-worn input device earlier this year, and at Connect last week it showed, for the first time, a prototype device being used for text input.

Though the floating window interface on the right is just a simulation, Meta purports that the left view shows an actual prototype device driving the text inputs.

And while it’s slow today in its prototype form, Abrash believes it has huge potential and room for improvement; he calls it “genuinely unprecedented technology.”
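
Meta hasn’t disclosed its decoder, but the underlying problem can be framed as windowed classification of multi-channel EMG. Here’s a deliberately simple sketch using per-channel RMS features and nearest-centroid matching; production systems use learned neural decoders, and everything here (channel count, templates, keys) is a hypothetical stand-in.

```python
import numpy as np

def rms_features(window: np.ndarray) -> np.ndarray:
    """window: (channels, samples) raw EMG -> per-channel RMS energy."""
    return np.sqrt(np.mean(window ** 2, axis=1))

def decode_key(window: np.ndarray, templates: dict[str, np.ndarray]) -> str:
    """Classify one EMG window to the key with the nearest template."""
    feats = rms_features(window)
    return min(templates, key=lambda k: np.linalg.norm(feats - templates[k]))

# Usage with fabricated data: 16 channels, 200-sample window.
rng = np.random.default_rng(0)
templates = {key: rng.random(16) for key in "abc"}
window = rng.standard_normal((16, 200)) * 0.5
print(decode_key(window, templates))
```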

Contextual AI

Meta is also working on contextual AI systems to make augmented reality more useful.

To help the system understand the user’s intent, Meta used its detailed virtual apartment recreation as an underlying map, allowing the glasses to infer what the user wants by observing their gaze.

By anticipating intent, Abrash says, the user may only need a single button for input, one which chooses the correct action depending upon the context, like turning on a TV or a light. It’s easy to see how this ‘simple input with context’ dovetails nicely with the company’s concept of a wrist-mounted input device.
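
As a toy illustration of that ‘one button, many meanings’ idea: resolve a single generic click against whatever object the user’s gaze lands on in the scene map. The object and action names below are hypothetical.

```python
# Map of gaze targets to context-appropriate actions (illustrative).
GAZE_ACTIONS = {
    "tv": lambda: print("TV: toggling power"),
    "lamp": lambda: print("Lamp: toggling light"),
    "speaker": lambda: print("Speaker: play/pause"),
}

def on_click(gazed_object_id: str) -> None:
    """Dispatch one generic click to the action the context implies."""
    action = GAZE_ACTIONS.get(gazed_object_id)
    if action is not None:
        action()
    else:
        print(f"No contextual action for {gazed_object_id!r}")

on_click("tv")    # -> TV: toggling power
on_click("lamp")  # -> Lamp: toggling light
```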

– – — – –

Abrash says this overview of some of Meta’s latest R&D projects is just a fraction of what’s being invested in, researched, and built. In order to reach the company’s ultimate vision of the metaverse, he says it’s “going to take about a dozen major technological breakthroughs […] and we’re working on all of them.”
