Categories
Computing

Meta expects a billion people in the metaverse by 2030

Meta believes that a billion people will be participating in the metaverse within the next decade, despite the concept feeling very nebulous at the moment.

CEO Mark Zuckerberg spoke with CNBC’s Jim Cramer on a recent broadcast of Mad Money and went on to say that purchases of metaverse digital content would bring in hundreds of billions of dollars for the company by 2030. This would quickly reverse the growing deficit of Meta’s Reality Labs, which has already invested billions into researching and developing VR and AR hardware and software.

Currently, this sounds like a stretch given that only a small percentage of the population owns virtual reality hardware and few dedicated augmented reality devices have been released by major manufacturers. Apple and Google have each developed AR solutions for smartphones, and Meta has acknowledged that the metaverse won't require special hardware to access.

Any modern computer, tablet, or smartphone has sufficient performance to display virtual content. However, the fully immersive experience is available only when wearing a head-mounted display, whether that takes the form of a VR headset or AR glasses.

According to Cramer, Meta is not taking a cut from creators initially, while planning to continue to invest heavily in hardware and software infrastructure for the metaverse. Meta realizes it can't build an entire world by itself and needs the innovation of creators and the draw of influencers to make the platform take off in the way Facebook and Instagram have.

Zuckerberg explained that Meta's playbook has always been to build services that fill a need and grow the platform to a billion or more users before monetizing it. That means the next 5 to 10 years might be a rare opportunity for businesses and consumers to take advantage of a low-cost metaverse experience before Meta begins to demand a share. Just as Facebook was once ad-free, the early metaverse might be blissfully free of distractions.

This isn’t exclusively Meta’s strategy, but the growth method employed by most internet-based companies. Focusing on growth first and money later has become standard practice. In the future, a balancing act will be required to make enough money to fund services while keeping the metaverse affordable enough to retain users.

While Meta might not get a billion people to strap on a VR headset by 2030, there’s little doubt that the metaverse will become an active area of growth. It should interest enough VR, AR, smartphone, tablet, and computer owners to be self-sustaining within a few years and could actually explode to reach a billion people by 2030.


Categories
AI

Meta announces plans to build an AI-powered ‘universal speech translator’

Meta, the owner of Facebook, Instagram, and WhatsApp, has announced an ambitious new AI research project to create translation software that works for “everyone in the world.” The project was announced as part of an event focusing on the broad range of benefits Meta believes AI can offer the company’s metaverse plans.

“The ability to communicate with anyone in any language — that’s a superpower people have dreamed of forever, and AI is going to deliver that within our lifetimes,” said Meta CEO Mark Zuckerberg in an online presentation.

The company says that although commonly spoken languages like English, Mandarin, and Spanish are well catered to by current translation tools, roughly 20 percent of the world’s population do not speak languages covered by these systems. Often, these under-served languages do not have easily accessible corpuses of written text that are needed to train AI systems or sometimes have no standardized writing system at all.

Meta says it wants to overcome these challenges by deploying new machine learning techniques in two specific areas. The first focus, dubbed No Language Left Behind, will concentrate on building AI models that can learn to translate language using fewer training examples. The second, Universal Speech Translator, will aim to build systems that directly translate speech in real-time from one language to another without the need for a written component to serve as an intermediary (a common technique for many translation apps).
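
The difference between the two architectures is easiest to see side by side. Below is a minimal sketch contrasting them under the description above; every class and method name is a hypothetical stand-in, not one of Meta's actual systems.

```python
# Hypothetical sketch contrasting the two translation architectures
# described above. None of these classes correspond to real Meta APIs.

class CascadedTranslator:
    """The common approach: speech -> text -> translated text -> speech.
    Unusable for languages with no standardized writing system."""

    def __init__(self, asr, text_mt, tts):
        self.asr = asr          # automatic speech recognition
        self.text_mt = text_mt  # text-to-text machine translation
        self.tts = tts          # text-to-speech synthesis

    def translate(self, audio, src_lang, tgt_lang):
        text = self.asr.transcribe(audio, lang=src_lang)
        translated = self.text_mt.translate(text, src_lang, tgt_lang)
        return self.tts.synthesize(translated, lang=tgt_lang)


class DirectSpeechTranslator:
    """The Universal Speech Translator goal: one model maps source-language
    speech straight to target-language speech, with no text intermediary."""

    def __init__(self, s2st_model):
        self.s2st_model = s2st_model

    def translate(self, audio, src_lang, tgt_lang):
        return self.s2st_model.translate(audio, src_lang, tgt_lang)
```

The direct path is what makes unwritten languages reachable in principle: nothing in it ever requires a transcript.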

In a blog post announcing the news, Meta researchers did not offer a timeframe for completing these projects or even a roadmap for major milestones in reaching their goal. Instead, the company stressed the utopian possibilities of universal language translation.

“Eliminating language barriers would be profound, making it possible for billions of people to access information online in their native or preferred language,” they write. “Advances in [machine translation] won’t just help those people who don’t speak one of the languages that dominates the internet today; they’ll also fundamentally change the way people in the world connect and share ideas.”

Crucially, Meta also envisions that such technology would hugely benefit its globe-spanning products — furthering their reach and turning them into essential communication tools for millions. The blog post notes that universal translation software would be a killer app for future wearable devices like AR glasses (which Meta is building) and would also break down boundaries in “immersive” VR and AR reality spaces (which Meta is also building). In other words, though developing universal translation tools may have humanitarian benefits, it also makes good business sense for a company like Meta.

It’s certainly true that advances in machine learning in recent years have hugely improved the speed and accuracy of machine translation. A number of big tech companies, from Google to Apple, now offer users free AI translation tools, used for work and tourism, and undoubtedly provide incalculable benefits around the world. But the underlying technology has its problems, too, with critics noting that machine translation misses nuances critical for human speakers, injects gendered bias into its outputs, and is capable of throwing up those weird, unexpected errors only a computer can. Some speakers of uncommon languages also say they fear losing hold of their speech and culture if the ability to translate their words is controlled solely by big tech.

Considering such errors is critical when massive platforms like Facebook and Instagram apply such translations automatically. Consider, for example, a case from 2017 when a Palestinian man was arrested by Israeli police after Facebook’s machine translation software mistranslated a post he shared. The man wrote “good morning” in Arabic, but Facebook translated this as “hurt them” in English and “attack them” in Hebrew.

And while Meta has long aspired to global access, the company’s own products remain biased towards countries that provide the bulk of its revenue. Internal documents published as part of the Facebook Papers revealed how the company struggles to moderate hate speech and abuse in languages other than English. These blind spots can have incredibly deadly consequences, as when the company failed to tackle misinformation and hate speech in Myanmar prior to the Rohingya genocide. And similar cases involving questionable translations occupy Facebook’s Oversight Board to this day.

So while a universal translator is an incredible aspiration, Meta will need to prove not only that its technology is equal to the task but that, as a company, it can apply its research fairly.


Categories
Computing

Meta just revealed how VR headsets could look in the future

Meta recently previewed a futuristic-looking VR headset concept in a metaverse promotional video. There’s no confirmation that this is an actual product in development, but the new device is clearly much more advanced than a Quest headset and even slimmer than the upcoming Cambria headset.

Fingertip sensors are also shown and might help to quickly identify finger location with great precision, as well as provide haptic feedback.

These glimpses of the future come from Meta's habit of posting a few videos each month, some representing near-term hardware and others looking further ahead to future VR headsets.

In the most futuristic video, Meta imagines a time when the metaverse might have rendering quality that’s indistinguishable from reality, or perhaps the company simply took artistic license. There’s little doubt that this will be possible someday but it’s hard to say when that might come to pass. Three practical examples of the metaverse were given in the video.

In a lecture that's accessible via the metaverse, students can either be physically present or teleport into a seat, and the professor can manipulate virtual 3D objects, such as a biological cell, to discuss its metabolism. The cell can be tossed to a student and examined more closely while it divides.

A bit further in, a medical student practices for surgery on a virtual patient using an advanced VR headset and fingertip sensors that might provide greater precision and haptic feedback. This type of training, which can be repeated hundreds or thousands of times, would be very useful before moving on to cadavers for hands-on experience.

Meta VR fingertip sensors shown on someone's hands.

Finally, Meta’s concept video demonstrates history coming to life with modern students visiting ancient Rome and watching Mark Antony engage in a debate over the flaws and merits of Julius Caesar as a ruler. The students can walk around and examine the scene as if actually there.

Another video illustrates the current state of VR and how a father and daughter can connect while fishing despite being separated by nearly 2,000 miles. Meta didn't identify the app; however, it appears to be Real VR Fishing, a multiplayer fishing simulation that's available right now for Quest and Quest 2 VR headsets for $20. That's right, the metaverse is already here in some respects.

While the potential of the future metaverse is certainly very enticing, plenty of work must be done before this vision becomes a reality. The early versions of classrooms, hands-on training, and historical locations already exist in various apps and are well done within the limitations of the current hardware. What will become possible in the near future with more advanced headsets remains to be seen.

The wait won’t be long since Meta’s Cambria, a more expensive VR headset, is expected later this year. It will be interesting to see how well Meta’s Cambria can render the early metaverse and how immersive the experience might be.


Categories
AI

A Meta prototype lets you build virtual worlds by describing them

Meta is testing an artificial intelligence system that lets people build parts of virtual worlds by describing them, and CEO Mark Zuckerberg showed off a prototype at a live event today. The proof of concept, called Builder Bot, could eventually draw more people into Meta’s Horizon “metaverse” virtual reality experiences. It could also advance creative AI tech that powers machine-generated art.

In a prerecorded demo video, Zuckerberg walked viewers through the process of making a virtual space with Builder Bot, starting with commands like “let’s go to the beach,” which prompts the bot to create a cartoonish 3D landscape of sand and water around him. (Zuckerberg describes this as “all AI-generated.”) Later commands range from broad demands like creating an island to extremely specific requests like adding altocumulus clouds and — in a joke poking fun at himself — a model of a hydrofoil. They also include playing sound effects like “tropical music,” which Zuckerberg suggests is coming from a boombox that Builder Bot created, although it could also have been general background audio. The video doesn’t specify whether Builder Bot draws on a limited library of human-created models or if the AI plays a role in generating the designs.
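
Since the video leaves open whether commands pull from a library of pre-built assets or trigger generative models, here is a minimal sketch of the simpler interpretation: keyword lookup into an asset library. All asset names are invented for illustration; this is not Meta's implementation.

```python
# Minimal sketch of a Builder Bot-style command handler. Meta hasn't
# published how commands map to 3D content; this assumes the simplest
# design the video leaves open: keyword lookup into a model library.

SCENE_LIBRARY = {
    "beach": ["sand_terrain", "water_plane", "palm_tree"],
    "island": ["island_mesh"],
    "altocumulus clouds": ["cloud_altocumulus"],
    "hydrofoil": ["hydrofoil_model"],
    "tropical music": ["boombox_with_audio"],
}

def handle_command(utterance: str, scene: list) -> list:
    """Match a spoken request against known assets and add them to the scene."""
    lowered = utterance.lower()
    for keyword, assets in SCENE_LIBRARY.items():
        if keyword in lowered:
            scene.extend(assets)
    return scene

scene = []
for command in ["Let's go to the beach", "Add altocumulus clouds"]:
    scene = handle_command(command, scene)
print(scene)  # ['sand_terrain', 'water_plane', 'palm_tree', 'cloud_altocumulus']
```

A generative variant would replace the dictionary lookup with a text-conditioned 3D model, which is exactly the ambiguity the demo doesn't resolve.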

Several AI projects have demonstrated image generation based on text descriptions, including OpenAI’s DALL-E, Nvidia’s GauGAN2, and VQGAN+CLIP, as well as more accessible applications like Dream by Wombo. But these well-known projects involve creating 2D images (sometimes very surreal ones) without interactive components, although some researchers are working on 3D object generation.

As described by Meta and shown in the demo, Builder Bot appears to be using voice input to add 3D objects that users can walk around, and Meta is aiming for more ambitious interactions. “You’ll be able to create nuanced worlds to explore and share experiences with others with just your voice,” Zuckerberg promised during the event keynote. Meta made several other AI announcements during the event, including plans for a universal language translator, a new version of a conversational AI system, and an initiative to build new translation models for languages without large written data sets.

Zuckerberg acknowledged that sophisticated interactivity, including the kinds of usable virtual objects many VR users take for granted, poses major challenges. AI generation can pose unique moderation problems if users ask for offensive content or the AI’s training reproduces human biases and stereotypes about the world. And we don’t know the limits of the current system. So for now, you shouldn’t expect to see Builder Bot pop up in Meta’s social VR platform — but you can get a taste of Meta’s plans for its AI future.

Update 12:50PM ET: Added details about later event announcements from Meta.


Categories
Security

Hacking group posted fake Ukrainian surrender messages, says Meta in new report

A Belarus-aligned hacking group has attempted to compromise the Facebook accounts of Ukrainian military personnel and posted videos from hacked accounts calling on the Ukrainian army to surrender, according to a new security report from Meta (the parent company of Facebook).

The hacking campaign, previously labeled “Ghostwriter” by security researchers, was carried out by a group known as UNC1151, which has been linked to the Belarusian government in research conducted by Mandiant. A February security update from Meta flagged activity from the Ghostwriter operation, but since that update, the company said that the group had attempted to compromise “dozens” more accounts, although it had only been successful in a handful of cases.

Where successful, the hackers behind Ghostwriter had been able to post videos that appeared to come from the compromised accounts, but Meta said that it had blocked these videos from being shared further.

The spreading of fake surrender messages has already been a tactic of hackers who compromised television networks in Ukraine and planted false reports of a Ukrainian surrender into the chyrons of live broadcast news. Though such statements can quickly be disproved, experts have suggested that their purpose is to erode Ukrainians’ trust in media overall.

The details of the latest Ghostwriter hacks were published in the first installment of Meta’s quarterly Adversarial Threat Report, a new offering from the company that builds on a similar report from December 2021 that detailed threats faced throughout that year. While Meta has previously published regular reports on coordinated inauthentic behavior on the platform, the scope of the new threat report is wider and encompasses espionage operations and other emerging threats like mass content reporting campaigns.

Besides the hacks against military personnel, the latest report also details a range of other actions conducted by pro-Russian threat actors, including covert influence campaigns against a variety of Ukrainian targets. In one case from the report, Meta alleges that a group linked to the Belarusian KGB attempted to organize a protest event against the Polish government in Warsaw, although the event and the account that created it were quickly taken offline.

Although foreign influence operations like these make up some of the most dramatic details of the report, Meta says that it has also seen an uptick in influence campaigns conducted domestically by repressive governments against their own citizens. In a conference call with reporters Wednesday, Facebook’s president for global affairs, Nick Clegg, said that attacks on internet freedom had intensified sharply.

“While much of the public attention in recent years has been focused on foreign interference, domestic threats are on the rise globally,” Clegg said. “Just as in 2021, more than half the operations we disrupted in the first three months of this year targeted people in their own countries, including by hacking people’s accounts, running deceptive campaigns and falsely reporting content to Facebook to silence critics.”

Authoritarian regimes generally looked to control access to information in two ways, Clegg said: firstly by pushing propaganda through state-run media and influence campaigns, and secondly by trying to shut down the flow of credible alternative sources of information.

Per Meta’s report, the latter approach has also been used to restrict information about the Ukraine conflict, with the company removing a network of around 200 Russian-operated accounts that engaged in coordinated reporting of other users for fictitious violations, including hate speech, bullying, and inauthenticity, in an attempt to have them and their posts removed from Facebook.

Echoing an argument taken from Meta’s lobbying efforts, Clegg said that the threats outlined in the report showed “why we need to protect the open internet, not just against authoritarian regimes, but also against fragmentation from the lack of clear rules.”


Categories
AI

Meta has built an AI supercomputer it says will be world’s fastest by end of 2022

Social media conglomerate Meta is the latest tech company to build an “AI supercomputer” — a high-speed computer designed specifically to train machine learning systems. The company says its new AI Research SuperCluster, or RSC, is already among the fastest machines of its type and, when complete in mid-2022, will be the world’s fastest.

“Meta has developed what we believe is the world’s fastest AI supercomputer,” said Meta CEO Mark Zuckerberg in a statement. “We’re calling it RSC for AI Research SuperCluster and it’ll be complete later this year.”

The news demonstrates the absolute centrality of AI research to companies like Meta. Rivals like Microsoft and Nvidia have already announced their own “AI supercomputers,” which are slightly different from what we think of as regular supercomputers. RSC will be used to train a range of systems across Meta’s businesses: from content moderation algorithms used to detect hate speech on Facebook and Instagram to augmented reality features that will one day be available in the company’s future AR hardware. And, yes, Meta says RSC will be used to design experiences for the metaverse — the company’s insistent branding for an interconnected series of virtual spaces, from offices to online arenas.

“RSC will help Meta’s AI researchers build new and better AI models that can learn from trillions of examples; work across hundreds of different languages; seamlessly analyze text, images, and video together; develop new augmented reality tools; and much more,” write Meta engineers Kevin Lee and Shubho Sengupta in a blog post outlining the news.

“We hope RSC will help us build entirely new AI systems that can, for example, power real-time voice translations to large groups of people, each speaking a different language, so they can seamlessly collaborate on a research project or play an AR game together.”

Meta’s AI supercomputer is due to be complete by mid-2022.
Image: Meta

Work on RSC began a year and a half ago, with Meta’s engineers designing the machine’s various systems — cooling, power, networking, and cabling — entirely from scratch. Phase one of RSC is already up and running and consists of 760 Nvidia DGX A100 systems containing 6,080 connected GPUs (a type of processor that’s particularly good at tackling machine learning problems). Meta says it’s already delivering up to 20 times better performance on its standard machine vision research tasks.

Before the end of 2022, though, phase two of RSC will be complete. At that point, it’ll contain some 16,000 total GPUs and will be able to train AI systems “with more than a trillion parameters on data sets as large as an exabyte.” (This raw number of GPUs only provides a narrow metric for a system’s overall performance, but, for comparison’s sake, Microsoft’s AI supercomputer built with research lab OpenAI is built from 10,000 GPUs.)
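
The quoted figures imply a consistent per-node layout. Here is a quick sanity check, assuming phase two keeps the same eight-GPU-per-node density, which Meta hasn't confirmed:

```python
# Sanity-checking the RSC figures quoted above.
phase1_nodes = 760                    # Nvidia DGX A100 systems in phase one
phase1_gpus = 6_080
gpus_per_node = phase1_gpus // phase1_nodes
print(gpus_per_node)                  # 8, the standard DGX A100 configuration

# Phase two target, assuming the same node density (not confirmed by Meta):
phase2_gpus = 16_000
print(phase2_gpus // gpus_per_node)   # 2,000 nodes
print(phase2_gpus / phase1_gpus)      # ~2.6x the GPU count of phase one
```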

These numbers are all very impressive, but they do invite the question: what is an AI supercomputer anyway? And how does it compare to what we usually think of as supercomputers — vast machines deployed by universities and governments to crunch numbers in complex domains like space, nuclear physics, and climate change?

The two types of systems, known as high-performance computers or HPCs, are certainly more similar than they are different. Both are closer to datacenters than individual computers in size and appearance and rely on large numbers of interconnected processors to exchange data at blisteringly fast speeds. But there are key differences between the two, as HPC analyst Bob Sorensen of Hyperion Research explains to The Verge. “AI-based HPCs live in a somewhat different world than their traditional HPC counterparts,” says Sorensen, and the big distinction is all about accuracy.

The brief explanation is that machine learning requires less accuracy than the tasks put to traditional supercomputers, and so “AI supercomputers” (a bit of recent branding) can carry out more calculations per second than their regular brethren using the same hardware. That means when Meta says it’s built the “world’s fastest AI supercomputer,” it’s not necessarily a direct comparison to the supercomputers you often see in the news (rankings of which are compiled by the independent Top500.org and published twice a year).

To explain this a little more, you need to know that both supercomputers and AI supercomputers make calculations using what is known as floating-point arithmetic — a mathematical shorthand that’s extremely useful for making calculations using very large and very small numbers (the “floating point” in question is the decimal point, which “floats” between significant figures). The degree of accuracy deployed in floating-point calculations can be adjusted based on different formats, and the speed of most supercomputers is calculated using what are known as 64-bit floating-point operations per second, or FLOPs. However, because AI calculations require less accuracy, AI supercomputers are often measured in 32-bit or even 16-bit FLOPs. That’s why comparing the two types of systems is not necessarily apples to apples, though this caveat doesn’t diminish the incredible power and capacity of AI supercomputers.
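
A small numpy experiment makes the trade-off concrete: the same division carried out in the three formats mentioned above retains progressively fewer significant digits.

```python
import numpy as np

# The same arithmetic at the three precisions discussed above. Machine
# epsilon (eps) is the gap between 1.0 and the next representable number:
# the smaller it is, the more significant digits the format retains.
for dtype in (np.float64, np.float32, np.float16):
    one_third = np.array(1.0, dtype=dtype) / np.array(3.0, dtype=dtype)
    print(f"{dtype.__name__}: 1/3 = {one_third}  eps = {np.finfo(dtype).eps}")

# float64: 1/3 = 0.3333333333333333  eps = 2.220446049250313e-16
# float32: 1/3 = 0.33333334          eps = 1.1920929e-07
# float16: 1/3 = 0.3333              eps = 0.000977
```

Neural network training tolerates float16's coarseness and gains throughput per chip in return; simulations of nuclear physics or climate generally cannot afford the lost digits.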

Sorensen offers one extra word of caution, too. As is often the case with the “speeds and feeds” approach to assessing hardware, vaunted top speeds are not always representative. “HPC vendors typically quote performance numbers that indicate the absolute fastest their machine can run. We call that the theoretical peak performance,” says Sorensen. “However, the real measure of a good system design is one that can run fast on the jobs they are designed to do. Indeed, it is not uncommon for some HPCs to achieve less than 25 percent of their so-called peak performance when running real-world applications.”

In other words: the true utility of supercomputers is to be found in the work they do, not their theoretical peak performance. For Meta, that work means building moderation systems at a time when trust in the company is at an all-time low and means creating a new computing platform — whether based on augmented reality glasses or the metaverse — that it can dominate in the face of rivals like Google, Microsoft, and Apple. An AI supercomputer offers the company raw power, but Meta still needs to find the winning strategy on its own.


Categories
Security

Apple and Meta shared data with hackers pretending to be law enforcement officials

Apple and Meta handed over user data to hackers who faked emergency data request orders typically sent by law enforcement, according to a report by Bloomberg. The slip-up happened in mid-2021, with both companies falling for the phony requests and providing information about users’ IP addresses, phone numbers, and home addresses.

Law enforcement officials often request data from social platforms in connection with criminal investigations, allowing them to obtain information about the owner of a specific online account. While these requests require a subpoena or search warrant signed by a judge, emergency data requests don’t — and are intended for cases that involve life-threatening situations.

Fake emergency data requests are becoming increasingly common, as explained in a recent report from Krebs on Security. During an attack, hackers must first gain access to a police department’s email systems. The hackers can then forge an emergency data request that describes the potential danger of not having the requested data sent over right away, all while assuming the identity of a law enforcement official. According to Krebs, some hackers are selling access to government emails online, specifically with the purpose of targeting social platforms with fake emergency data requests.

As Krebs notes, the majority of bad actors carrying out these fake requests are actually teenagers — and according to Bloomberg, cybersecurity researchers believe the teen mastermind behind the Lapsus$ hacking group could be involved in conducting this type of scam. London police have since arrested seven teens in connection with the group.

But last year’s string of attacks may have been performed by members of a cybercriminal group called Recursion Team. Although the group has disbanded, some of its members have since joined Lapsus$ under different names. Officials involved in the investigation told Bloomberg that hackers accessed the accounts of law enforcement agencies in multiple countries and targeted many companies over the course of several months starting in January 2021.

“We review every data request for legal sufficiency and use advanced systems and processes to validate law enforcement requests and detect abuse,” Andy Stone, Meta’s policy and communications director, said in an emailed statement to The Verge. “We block known compromised accounts from making requests and work with law enforcement to respond to incidents involving suspected fraudulent requests, as we have done in this case.”

When asked for comment, Apple directed The Verge to its law enforcement guidelines, which state: “If a government or law enforcement agency seeks customer data in response to an Emergency Government & Law Enforcement Information Request, a supervisor for the government or law enforcement agent who submitted the Emergency Government & Law Enforcement Information Request may be contacted and asked to confirm to Apple that the emergency request was legitimate.”

Meta and Apple aren’t the only known companies affected by fake emergency data requests. Bloomberg says hackers also contacted Snap with a forged request, but it’s not clear if the company followed through. Krebs on Security’s report also includes a confirmation from Discord that the platform gave away information in response to one of these fake requests.

“This tactic poses a significant threat across the tech industry,” Peter Day, Discord’s group manager for corporate communications, said in an emailed statement to The Verge. “We are continuously investing in our Trust & Safety capabilities to address emerging issues like this one.”

Snap didn’t immediately respond to a request for comment from The Verge.

Update March 30th 9:24PM ET: Updated to include a statement from a Discord spokesperson.


Categories
AI

Meta launches PyTorch Live to build AI-powered mobile experiences

During its PyTorch Developer Day conference, Meta (formerly Facebook) announced PyTorch Live, a set of tools designed to make it easier to build AI-powered experiences for mobile devices. PyTorch Live offers a single programming language — JavaScript — to build apps for Android and iOS, as well as a process for preparing custom machine learning models to be used by the broader PyTorch community.

“PyTorch’s mission is to accelerate the path from research prototyping to production deployment. With the growing mobile machine learning ecosystem, this has never been more important than before,” a spokesperson told VentureBeat via email. “With the aim of helping reduce the friction for mobile developers to create novel machine learning-based solutions, we introduce PyTorch Live: a tool to build, test, and (in the future) share on-device AI demos built on PyTorch.”


PyTorch, which Meta publicly released in January 2017, is an open source machine learning library based on Torch, a scientific computing framework and script language that is in turn based on the Lua programming language. While TensorFlow has been around slightly longer (since November 2015), PyTorch continues to see rapid uptake in the data science and developer community. It claimed one of the top spots among fast-growing open source projects in GitHub’s 2018 Octoverse report, and Meta recently revealed that in 2019 the number of contributors on the platform grew more than 50% year-over-year to nearly 1,200.

PyTorch Live builds on PyTorch Mobile, a runtime that allows developers to go from training a model to deploying it while staying within the PyTorch ecosystem, and the React Native library for creating visual user interfaces. PyTorch Mobile powers the on-device inference for PyTorch Live.

PyTorch Mobile launched in October 2019, following the earlier release of Caffe2go, a mobile CPU- and GPU-optimized version of Meta’s Caffe2 machine learning framework. PyTorch Mobile ships with its own runtime and was created with the assumption that anything a developer wants to do on a mobile or edge device, the developer might also want to do on a server.
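
The train-to-deploy path the runtime enables is short. The sketch below uses public PyTorch Mobile APIs; the model choice and output file name are arbitrary examples, not anything tied to PyTorch Live itself.

```python
import torch
import torchvision
from torch.utils.mobile_optimizer import optimize_for_mobile

# Take a trained model, freeze it into TorchScript, apply mobile-specific
# optimizations, and save it in the format the on-device runtime loads.
model = torchvision.models.mobilenet_v2(pretrained=True)
model.eval()

example_input = torch.rand(1, 3, 224, 224)
traced = torch.jit.trace(model, example_input)

mobile_ready = optimize_for_mobile(traced)
mobile_ready._save_for_lite_interpreter("mobilenet_v2.ptl")
```

The resulting `.ptl` file is what an Android or iOS app loads for on-device inference, which is the step PyTorch Live wraps in JavaScript.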

“For example, if you want to showcase a mobile app model that runs on Android and iOS, it would have taken days to configure the project and build the user interface. With PyTorch Live, it cuts the cost in half, and you don’t need to have Android and iOS developer experience,” Meta AI software engineer Roman Radle said in a prerecorded video shared with VentureBeat ahead of today’s announcement.

Built-in tools

PyTorch Live ships with a command-line interface (CLI) and a data processing API. The CLI enables developers to set up a mobile development environment and bootstrap mobile app projects. As for the data processing API, it prepares and integrates custom models to be used with the PyTorch Live API, which can then be built into mobile AI-powered apps for Android and iOS.

In the future, Meta plans to enable the community to discover and share PyTorch models and demos through PyTorch Live, as well as provide a more customizable data processing API and support machine learning domains that work with audio and video data.


“This is our initial approach of making it easier for [developers] to build mobile apps and showcase machine learning models to the community,” Radle continued. “It’s also an opportunity to take this a step further by building a thriving community [of] researchers and mobile developers [who] share and utilize mobile models and engage in conversations with each other.”



Categories
Computing

Meta Envisions Haptic Gloves as the Future of the Metaverse

The metaverse seems to be coming, as is the futuristic hardware that will increase immersion in virtual worlds. Meta, the company formerly known as Facebook, has shared how its efforts to usher in that new reality are focusing on how people will actually feel sensations in a virtual world.

The engineers at Meta have developed a number of early prototypes that tackle this goal and they include both haptic suits and gloves that could enable real-time sensations.

Meta’s Reality Labs was tasked with developing, and in many cases inventing, new technologies that would enable greater human-computer interaction. The company started earlier this year by laying out a vision for the future of augmented reality (AR) and VR and how to best interact with virtual objects. This kind of research is crucial if we’re moving toward a future where a good chunk of our day is spent inside virtual 3D worlds.

Sean Keller, Reality Labs research director, said the team wants to build something that feels just as natural in the AR/VR world as it does in the real world. The problem, he admits, is that the technology isn’t yet advanced enough to feel natural, and this experience probably won’t arrive for another 10 to 15 years.

According to Keller, we’d ideally use haptic gloves that are soft, lightweight, and able to accurately reproduce the pressure, texture, and vibration that correspond with a virtual object. That requires hundreds of tiny actuators that can simulate physical sensations. Currently, existing mechanical actuators are too bulky, expensive, and hot to work well; Keller says the solution requires softer, more pliable materials.

To solve this problem, the Reality Labs teams turned to research into prosthetic limbs, namely soft robotics and microfluidics. The researchers were able to create the world’s first high-speed microfluidic processor, which is able to control the air flow that moves tiny, soft actuators. The chip tells the valves in the actuators when to move and how far.
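
None of the control logic has been published, but the job the chip performs can be caricatured in a few lines: read each actuator's pressure, compare it to the sensation being rendered, and open valves proportionally. The toy model below is entirely hypothetical and far slower than the real hardware.

```python
# Toy model of the valve-control problem described above. Entirely
# hypothetical: Meta's microfluidic chip does this in hardware, per
# actuator, at much higher speeds.

def valve_opening(current_kpa: float, target_kpa: float, gain: float = 0.5) -> float:
    """Proportional controller: return a valve opening fraction in [0, 1]."""
    error = target_kpa - current_kpa
    return max(0.0, min(1.0, gain * error))

# One fingertip pressing a virtual surface: pressure target steps to 2 kPa.
pressure = 0.0
for step in range(5):
    opening = valve_opening(pressure, target_kpa=2.0)
    pressure += opening * 0.8  # crude plant model: airflow raises pressure
    print(f"step {step}: valve={opening:.2f}  pressure={pressure:.2f} kPa")
```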

Meta researcher holding prototype haptic glove.

The research team was able to create prototype gloves, but the process requires them to be “made individually by skilled engineers and technicians who manufacture the subsystems and assemble the gloves largely by hand.” In order to build haptic gloves at scale for billions of people, new manufacturing processes would have to be invented. Not only do the gloves have to house all of the electronics and sensors, they also have to be slim, lightweight, and comfortable to wear for extended periods of time.

The Reality Labs materials group experimented with various polymers to turn them into fine fibers that could be woven into the gloves. To make it even more efficient, the team is trying to build multiple functions into the fibers including capacitance, conductivity, and sensing.

There have been other attempts at creating realistic haptic feedback. Researchers at the University of Chicago have been experimenting with “chemical haptics.” This involves using various chemicals to simulate different sensations. For example, capsaicin can be used to simulate heat or warmth while menthol does the opposite by simulating coolness.

Meta’s research into microfluidic processors and tiny sensors woven into gloves may be a bit more realistic than chemicals applied to the skin. It will definitely be interesting to see where Reality Labs takes its research as we move closer to the metaverse.


Categories
Game

Meta is acquiring the maker of VR workout app ‘Supernatural’

Facebook made it pretty clear that it’s focusing on the metaverse when it rebranded itself as Meta, and its latest acquisition is part of that effort. Jason Rubin, the company’s VP of metaverse content, has revealed that Meta is acquiring Within, the creator of immersive virtual reality workout app Supernatural for Oculus Quest headsets. A rep for Within previously described Supernatural to Engadget as “part Beat Saber, part Dance Dance Revolution, part Guitar Hero with your whole body.”

In a separate announcement (via TechCrunch), Within CEO Chris Milk and head of fitness Leanne Pedante said that the company’s coaches, choreographers, and managers will continue being part of the team. They’ll work on VR fitness experiences for Supernatural independently under Meta’s Reality Labs. While Within will have to answer to its new parent company going forward, Milk and Pedante’s statement says the acquisition will give them access to more resources, including more music, more features, and more social experiences.

In Supernatural, you have to hit colored orbs flying at you in its various VR environments using your controllers. The orbs will shatter if you hit them with enough force, but they’ll only float away if you don’t — you’ll get scored at the end based on how you do. Supernatural has a 30-day free trial period, after which it’ll cost you $19 a month for continued access.

