
Meta expects a billion people in the metaverse by 2030

Meta believes that a billion people will be participating in the metaverse within the next decade, despite the concept feeling very nebulous at the moment.

CEO Mark Zuckerberg spoke with CNBC’s Jim Cramer on a recent broadcast of Mad Money and went on to say that purchases of metaverse digital content would bring in hundreds of billions of dollars for the company by 2030. This would quickly reverse the growing deficit of Meta’s Reality Labs, which has already invested billions into researching and developing VR and AR hardware and software.

Currently, this sounds like a stretch, given that only a small percentage of the population owns virtual reality hardware and few dedicated augmented reality devices have been released by major manufacturers. Apple and Google have each developed AR solutions for smartphones, and Meta has acknowledged that the metaverse won’t require special hardware to access.

Any modern computer, tablet, or smartphone has sufficient performance to display virtual content. The fully immersive experience, however, is available only when wearing a head-mounted display, whether that takes the form of a VR headset or AR glasses.

According to Cramer, Meta is not taking a cut from creators initially, while planning to continue to invest heavily into hardware and software infrastructure for the metaverse. Meta realizes it can’t build an entire world by itself and needs the innovation of creators and the draw of influencers to make the platform take off in the way Facebook and Instagram have.

Zuckerberg explained that Meta’s playbook has always been to build services that fill a need and grow the platform to a billion or more users before monetizing it. That means the next 5 to 10 years might be a rare opportunity for businesses and consumers to take advantage of a low-cost metaverse experience before Meta begins to demand a share. Just as Facebook was once ad-free, the early metaverse might be blissfully free of distractions.

This isn’t exclusively Meta’s strategy, but the growth method employed by most internet-based companies. Focusing on growth first and money later has become standard practice. In the future, a balancing act will be required to make enough money to fund services while keeping the metaverse affordable enough to retain users.

While Meta might not get a billion people to strap on a VR headset by 2030, there’s little doubt that the metaverse will become an active area of growth. It should interest enough VR, AR, smartphone, tablet, and computer owners to be self-sustaining within a few years and could actually explode to reach a billion people by 2030.


New metaverse standards to address lack of interoperability

Big-name tech companies such as Meta, Microsoft, and Epic Games have formed a standards organization called the Metaverse Standards Forum (MSF). This is meant to be a group that creates open standards for all things metaverse, including virtual reality, augmented reality, and 3D technology.

Over 30 companies have signed on, some of which are deep in metaverse technology, like Meta itself. Others include Nvidia, Unity (creator of the popular game engine), Qualcomm, Sony, and even the web standards body itself, the World Wide Web Consortium (W3C).


According to the official press release:

“The Forum will explore where the lack of interoperability is holding back metaverse deployment and how the work of Standards Developing Organizations (SDOs) defining and evolving needed standards may be coordinated and accelerated. Open to any organization at no cost, the Forum will focus on pragmatic, action-based projects such as implementation prototyping, hackathons, plugfests, and open-source tooling to accelerate the testing and adoption of metaverse standards, while also developing consistent terminology and deployment guidelines.”

This seems to imply that many of the future technologies created for the metaverse will include some level of interoperability between companies. That doesn’t mean the metaverse will be the Internet 2.0, but it may let users carry certain profiles or data across metaverse platforms. In fact, this is directly stated in the press release:

“The metaverse will bring together diverse technologies, requiring a constellation of interoperability standards, created and maintained by many standards organizations,” said Neil Trevett, Khronos president. “The Metaverse Standards Forum is a unique venue for coordination between standards organizations and industry, with a mission to foster the pragmatic and timely standardization that will be essential to an open and inclusive metaverse.”

A vision of Meta's metaverse in the work setting.

Besides the W3C, other standards organizations have also joined the Forum, such as the Open AR Cloud, the Spatial Web Foundation, and the Open Geospatial Consortium. This lends considerable weight and much-needed legitimacy to the organization, as the metaverse is very much a burgeoning field of technology.

Interestingly, some major VR/AR players are conspicuously missing at the moment. Apple, which has already invested heavily in AR technology and is planning its own headset, has not yet joined the MSF. Niantic, maker of the popular AR game Pokemon Go, is absent from the roster as well. Protocol also points out that the Roblox Corporation, maker of the wildly successful Roblox game, has declined to join for now.

While not considered a “metaverse” in the popular sense, Roblox in particular has built an immersive 3D world where people can create entire games.

The absence of Apple, Niantic, and Roblox isn’t a foregone conclusion, however, as the MSF has only just begun. The good news is that most of the major players in metaverse tech have agreed to create some kind of unified standard to make development much easier. The press release named several important technology fields, including avatars, privacy and identity management, and financial transactions.

The Metaverse Standards Forum is scheduled to begin meeting next month.


‘Fortnite’ Party Worlds are purely social experiences made for the metaverse

Epic has made acquisitions and otherwise signaled plans for a Fortnite metaverse, but its latest move is one of the most obvious yet. The developer has introduced Fortnite Party Worlds: maps that are solely intended as social spaces where players can meet friends and play minigames. Unlike Hubs, these environments don’t link to other islands — think of them as final destinations.

The company has collaborated with creators fivewalnut and TreyJTH to offer a pair of example Party Worlds (a theme park and a lounge). However, the company is encouraging anyone to create and submit their own so long as they focus on the same goal of peaceful socialization.

Party Worlds living in isolation don’t strictly represent a metaverse. At the same time, they show how far Fortnite has shifted away from its original focus on battle royale and co-op gaming — there are now islands devoted solely to making friends, not to mention other non-combat experiences like virtual museums and trial courses. We wouldn’t expect brawls to disappear any time soon, but they’re quickly becoming just one part of a much larger experience.


What is the Metaverse and How Does it Work?

Wondering what the Metaverse is? Chances are you’ve already been there. There’s no one perfect description for the concept, but in general, we’re talking about digital interaction and human decision-making in a few key ways. The term “Metaverse” comes from Neal Stephenson’s 1992 science fiction novel Snow Crash – but it’s come to mean much more since then.

The Metaverse is Second Life

There were games before it that were similar, but the release of Second Life really struck a chord with pop culture in a way that still rings true today. As Dwight said in The Office, “Second Life is not a game. It is a multiuser virtual environment. It doesn’t have points, or scores, it doesn’t have winners or losers.”

Dwight does a decent job of explaining Second Life, which represents a very rudimentary doorway into the metaverse. Incidentally, the second bit, about how he plays Second Life, is important to explaining the difference between classic games and the metaverse.

Dwight says he created a version of himself in Second Life that was exactly the same as he was in real life, except he could fly. The metaverse can be as simple as that; it doesn’t need to be anything as wild and crazy as what we see in Ready Player One.

The Metaverse is Ready Player One

The book and movie Ready Player One present a future in which the idea of the metaverse has become so pervasive that people care more about their life in the machine than their life in the real world. In what the book and movie call The Oasis, we see a hosted metaverse with an idyllic (and potentially impossible) sense of freedom and openness.

At the same time, this representation of the metaverse suggests there’ll be one all-encompassing, all-accessible piece of software that’ll host everything and anything. Back here in the reality in which we live today, we’re crossing our fingers and praying that no singular universe like this ever dominates our digital world. The implications would be nightmarish.

The Metaverse is Minecraft and Roblox

The first time you play Minecraft, you realize you’ve entered a new realm of “gameplay.” You’re represented by an avatar that has the ability to dig holes, harvest materials, build things, and live a life as you see fit. You can also play games and go fishing.

In Minecraft, creativity comes from a careful balance of limitations and functionality. You control blocks, and you get a sense of accomplishment from achieving goals within an environment that has a clear set of rules. Minecraft was created as a game, and became a platform once its potential was revealed.

Roblox, on the other hand, was built as a platform for creators from the get-go: a place where content would be generated by users with very few restrictions.

Despite what its very fantastical trailers show, the Roblox platform is not as immediately aesthetically pleasing as Minecraft. Because of the relative lack of curation by those in charge of Roblox, it is not difficult to find glitches and games that are effectively non-functional.

The important bits of both Roblox and Minecraft lie in their creative potential. Both titles are immersive, and both allow you to create and modify the environment in which you exist.

The Metaverse is not new

The basic building blocks of the metaverse have been around since the early days of the internet. From the moment we started giving ourselves personalized usernames, using fun icons, and building our own webpages, we’ve been living in the metaverse, really.

It’s only now that describing these creative environments has become necessary. We’ve reached a point at which an all-digital environment can host more than just a game – it can be the place where we work, socialize, and effectively live an entire second life.

The Metaverse is Mixed Reality

Niantic’s description of the Real-World Metaverse captures the phase we’re entering now. Non-fungible tokens (NFTs) represent one way in which digital goods can be seen as “real” as physical goods. An experience like Pokemon GO shows us how attaching digital goods to our real world can make a platform feel like more than a game.

The potential for the metaverse is massive. Metaverse apps will generate billions in consumer spending from this point forward. Companies that successfully stake out and secure their place in this creative digital landscape will find monstrous room for growth.

There is no one metaverse

As an individual, it’s important that you stay aware of the dangers of this new reality. As it is with any phase change in our human experience, there’s room for profit and power, but there’s also room for malicious actors and all manner of people with bad intent.

There’ll be plenty of liars: lying liars who lie about how their take on the metaverse is the end-all, be-all platform for said metaverse. There is no one single “metaverse,” even if a company has branded its ecosystem as such.

Just as it has always been with the internet, so too is it true of the metaverse – there is no single authority, only entities. There are plenty of entry points into the ephemeral environment that is the metaverse, and not all elements within this future are compatible. Whatever avenue you choose, and with whomever you choose to interact, be careful – and have fun!


Nike is building its metaverse inside of ‘Roblox’

Meta and Microsoft aren’t the only companies with ambitions for the metaverse. On Thursday, Nike announced a partnership with Roblox to offer a free virtual playspace called Nikeland. In its current iteration, Nikeland includes minigames such as tag, dodgeball, and the floor is lava that players can check out with their friends. Mobile integration lets you use your phone to translate real-life movement into the game, so you can do things like long jumps and fast sprints. Naturally, there’s also a digital showroom where players can get Nike swag for their avatars.

According to CNBC, that’s only the start of what the brand has planned for the space. In the future, it could host competitions tied to global sporting events. For instance, it could host soccer games when the 2022 World Cup kicks off in Qatar. The showroom could also one day tease future product releases and allow users to co-create items.

It’s no surprise to see Nike partner with Roblox on a metaverse play. With more than 200 million estimated monthly active users, it’s one of the most popular games among kids and teenagers. By offering a free space where those young people can interact with the brand, Nike creates an avenue for them to become its customers in the real world.


Meta Envisions Haptic Gloves as the Future of the Metaverse

The metaverse seems to be coming, as is the futuristic hardware that will increase immersion in virtual worlds. Meta, the company formerly known as Facebook, has shared how its efforts to usher in that new reality are focusing on how people will actually feel sensations in a virtual world.

The engineers at Meta have developed a number of early prototypes that tackle this goal, including both haptic suits and gloves that could enable real-time sensations.

Meta’s Reality Labs was tasked with developing, and in many cases inventing, new technologies to enable greater human-computer interaction. The company laid out a vision earlier this year for the future of augmented reality (AR) and VR and how best to interact with virtual objects. This kind of research is crucial if we’re moving toward a future where a good chunk of our day is spent inside virtual 3D worlds.

Sean Keller, Reality Labs research director, said the team wants to build something that feels just as natural in the AR/VR world as it does in the real world. The problem, he admits, is that the technology isn’t yet advanced enough to feel natural, and this experience probably won’t arrive for another 10 to 15 years.

According to Keller, we’d ideally use haptic gloves that are soft, lightweight, and able to accurately reproduce the pressure, texture, and vibration corresponding to a virtual object. That requires hundreds of tiny actuators that can simulate physical sensations. Existing mechanical actuators are too bulky, expensive, and hot to work well; Keller says the job requires softer, more pliable materials.

To solve this problem, the Reality Labs teams turned to research into prosthetic limbs, namely soft robotics and microfluidics. The researchers were able to create the world’s first high-speed microfluidic processor, which is able to control the air flow that moves tiny, soft actuators. The chip tells the valves in the actuators when to move and how far.
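As a rough illustration of the control problem, here is a minimal sketch of a proportional control loop that opens a valve until a soft actuator reaches a target pressure. The names, units, and crude plant model are invented for illustration; Meta has not published its controller design.

```python
# Hypothetical sketch of microfluidic actuator control: a proportional
# controller nudges a valve toward a target fingertip pressure each tick.
# Names, units, and the simple plant model are illustrative only.

from dataclasses import dataclass

@dataclass
class Actuator:
    valve_opening: float = 0.0   # 0.0 (closed) to 1.0 (fully open)
    pressure: float = 0.0        # current pressure, arbitrary units

def control_step(actuator: Actuator, target_pressure: float,
                 gain: float = 0.2, leak: float = 0.1) -> None:
    """One tick of a proportional control loop for a single soft actuator."""
    error = target_pressure - actuator.pressure
    # The microfluidic chip's job in this sketch: decide how far to open the valve.
    actuator.valve_opening = min(1.0, max(0.0, actuator.valve_opening + gain * error))
    # Crude plant model: inflow scales with valve opening, minus a small leak.
    actuator.pressure += 0.2 * actuator.valve_opening - leak * actuator.pressure

# Simulate pressing a fingertip against a virtual object for 200 ticks.
fingertip = Actuator()
for _ in range(200):
    control_step(fingertip, target_pressure=1.0)
print(f"settled pressure: {fingertip.pressure:.2f}")  # close to the 1.0 target
```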

Meta researcher holding prototype haptic glove.

The research team was able to create prototype gloves, but the process requires them to be “made individually by skilled engineers and technicians who manufacture the subsystems and assemble the gloves largely by hand.” In order to build haptic gloves at scale for billions of people, new manufacturing processes would have to be invented. Not only do the gloves have to house all of the electronics and sensors, they also have to be slim, lightweight, and comfortable to wear for extended periods of time.

The Reality Labs materials group experimented with various polymers to turn them into fine fibers that could be woven into the gloves. To make the gloves even more efficient, the team is trying to build multiple functions into the fibers, including capacitance, conductivity, and sensing.

There have been other attempts at creating realistic haptic feedback. Researchers at the University of Chicago have been experimenting with “chemical haptics.” This involves using various chemicals to simulate different sensations. For example, capsaicin can be used to simulate heat or warmth while menthol does the opposite by simulating coolness.

Meta’s research into microfluidic processors and tiny sensors woven into gloves may be a bit more realistic than chemicals applied to the skin. It will definitely be interesting to see where Reality Labs takes its research as we move closer to the metaverse.


Unity moves robotics design and training to the metaverse

Unity, the San Francisco-based platform for creating and operating games and other 3D content, on November 10 announced the launch of Unity Simulation Pro and Unity SystemGraph to improve the modeling, testing, and training of complex systems through AI.

With robotics usage in supply chains and manufacturing increasing, such software is critical to ensuring efficient and safe operations.

Danny Lange, senior vice president of artificial intelligence for Unity, told VentureBeat via email that the Unity SystemGraph uses a node-based approach to model the complex logic typically found in electrical and mechanical systems. “This makes it easier for roboticists and engineers to model small systems, and allows grouping those into larger, more complex ones — enabling them to prototype systems, test and analyze their behavior, and make optimal design decisions without requiring access to the actual hardware,” said Lange.
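To make the node-based idea concrete, here is a toy sketch: components become nodes, wires become inputs, and evaluating the output node evaluates the whole system. The tiny API below is invented for illustration and is not SystemGraph’s actual interface.

```python
# Toy node-graph evaluator: each node wraps a function and the nodes
# feeding it. Evaluating the output node pulls values through the graph,
# the way node-based tools model electrical and mechanical systems.

class Node:
    def __init__(self, fn, *inputs):
        self.fn = fn
        self.inputs = inputs

    def evaluate(self):
        return self.fn(*(n.evaluate() for n in self.inputs))

# Leaf nodes supply values; inner nodes model component logic.
voltage = Node(lambda: 12.0)                             # power supply output (V)
resistance = Node(lambda: 4.0)                           # motor winding (ohms)
current = Node(lambda v, r: v / r, voltage, resistance)  # Ohm's law
torque = Node(lambda i: 0.8 * i, current)                # simple motor torque constant

# Small subsystems compose into larger ones by wiring more nodes together.
print(f"motor torque: {torque.evaluate():.2f} N·m")      # 2.40 N·m
```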

Unity’s execution engine, Unity Simulation Pro, offers headless rendering — eliminating the need to project each image to a screen and thus increasing simulation efficiency by up to 50% and lowering costs, the company said.

Use cases for robotics

“The Unity Simulation Pro is the only product built from the ground up to deliver distributed rendering, enabling multiple graphics processing units (GPUs) to render the same Unity project or simulation environment simultaneously, either locally or in the private cloud,” the company said. This means multiple robots with tens, hundreds, or even thousands of sensors can be simulated faster than real time on Unity today.
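The scheduling idea behind that claim can be sketched simply: partition the per-sensor render jobs across the available GPUs and process the batches in parallel, with no window or screen attached. The Python sketch below assumes a hypothetical job format and a round-robin split; it is not Unity’s API.

```python
# Sketch of distributed, headless rendering: render jobs for many simulated
# sensors are split across GPUs and processed in parallel. The job format
# and the round-robin split are assumptions for illustration.

from concurrent.futures import ProcessPoolExecutor

NUM_GPUS = 4

def render_batch(gpu_id: int, sensor_ids: list) -> dict:
    # A real worker would bind to its GPU and rasterize each sensor's view
    # headlessly (never presenting to a display); here we report the split.
    return {"gpu": gpu_id, "frames": len(sensor_ids)}

if __name__ == "__main__":
    sensors = list(range(1000))  # e.g. 100 robots x 10 sensors each
    batches = [sensors[i::NUM_GPUS] for i in range(NUM_GPUS)]  # round-robin
    with ProcessPoolExecutor(max_workers=NUM_GPUS) as pool:
        for result in pool.map(render_batch, range(NUM_GPUS), batches):
            print(f"GPU {result['gpu']} rendered {result['frames']} sensor frames")
```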

According to Lange, users in markets like robotics, autonomous driving, drones, agriculture technology, and more are building simulations containing environments, sensors, and models with million-square-foot warehouses, dozens of robots, and hundreds of sensors. With these simulations, they can test software against realistic virtual worlds, teach and train robot operators, or try physical integrations before real-world implementation. This is all faster, more cost-effective, and safer, taking place in the metaverse.

“A more specific use case would be using Unity Simulation Pro to investigate collaborative mapping and mission planning for robotic systems in indoor and outdoor environments,” Lange said. He added that some users have built a simulated 4,000 square-foot building sitting within a larger forested area and are attempting to identify ways to map the environment using a combination of drones, off-road mobile robots, and walking robots. The company reports it has been working to enable creators to build and model the sensors and systems of mechatronic systems to run in simulations.

A major application of Unity SystemGraph is sensor simulation: teams building simulations with physically accurate cameras and lidar models can use SensorSDK to take advantage of SystemGraph’s library of ready-to-use models and easily configure them for their specific cases.

Customers can now simulate at scale, iterate quickly, and test more to drive insights at a fraction of current simulation costs, Unity says. The company adds that customers like Volvo Cars, the Allen Institute for AI, and Carnegie Mellon University are already seeing results.

While there are several companies that have built simulators targeted especially at AI applications like robotics or synthetic data generation, Unity claims that the ease of use of its authoring tools makes it stand out above its rivals, including top competitors like Roblox, Aarki, Chartboost, MathWorks, and Mobvista. Lange says this is evident in the size of Unity’s existing user base of over 1.5 million creators using its editor tools.

Unity says its technology is aimed at impacting the industrial metaverse, where organizations continue to push the envelope on cutting-edge simulations.

“As these simulations grow in complexity in terms of the size of the environment, the number of sensors used in that environment, or the number of avatars operating in that environment, the need for our product increases. Our distributed rendering feature, which is unique to Unity Simulation Pro, enables you to leverage the increasing amount of GPU compute resources available to customers, in the cloud or on-premise networks, to render this simulation faster than real time. This is not possible with many open source rendering technologies or even the base Unity product — all of which will render at less than 50% real time for these scenarios,” Lange said.

The future of AI-powered technologies

Moving into 2022, Unity says it expects to see a steep increase in the adoption of AI-powered technologies, with two key adoption motivators. “On one side, companies like Unity will continue to deliver products that help lower the barrier to entry and help increase adoption by wider ranges of customers. This is combined with the decreasing cost of compute, sensors, and other hardware components,” Lange said. “Then on the customer adoption side, the key trends that will drive adoption are broader labor shortages and the demand for more operational efficiencies — all of which have the effect of accelerating the economics that drive the adoption of these technologies on both fronts.”

Unity is doubling down on building purpose-built products for its simulation users, enabling them to mimic the real world by simulating environments with various sensors, multiple avatars, and agents for significant performance gains with lower costs. The company says this will help its customers to take the first step into the industrial metaverse.

Unity will showcase the Unity Simulation Pro and Unity SystemGraph through in-depth sessions at the forthcoming Unity AI Summit on November 18, 2021.


Surgeons cautiously embrace medical metaverse

At the Future Surgery Show in London, it was clear that surgeons were cautiously embracing a medical metaverse to improve collaboration and medical outcomes. In many ways, the surgical industry has been a leader in embracing cutting-edge technologies like surgical robots, augmented reality, and improved patient modeling. It was equally clear that these pieces are still growing in siloed pockets that are just starting to come together.

“This is the year of robots,” declared Professor Shafi Ahmed, Chief Medical Officer at Medical Realities.

Established medical device leaders like Johnson & Johnson and Medtronic are introducing serious competition for early pioneers like Intuitive Surgical, which demonstrated the first da Vinci robot in 1997. Medtronic recently secured European approval for its Hugo line of robots, and Johnson & Johnson has been promoting its new Ottava system. Both companies have also partnered with Nvidia as part of the Clara line of tools for building out the medical omniverse.

In some ways, this feels like the equivalent of Ford and GM finally jumping into the ring with Tesla: a validation that surgical automation is the next big thing, in the same way that assisted-driving electric cars are the future of transportation.

But getting there will require not just better tools but considerable effort to transform data workflows and governance. Reliably capturing, organizing, and sharing medical data presents numerous cultural and institutional challenges that are more complicated than setting up a Google Street View program for human bodies.

Incremental progress

Ahmed, who is also a practicing surgeon at Barts Health NHS Trust in the U.K., has been a bit of a pioneer in this field, having introduced the world to the surgical metaverse in 2016. Over 55,000 people watched the 360-degree live surgical broadcast. The rest of the industry is just catching up with him.

A glance around the show floor revealed cutting-edge advances making practical progress toward this aspiration. One vendor touted the clarity that 4K imaging brings to the operating room. Braun showed off a new 3D display that lets surgeons look ahead with better ergonomics during extended operations rather than hunching over a microscope. And Epiqar was highlighting a Zoom-like service for the operating theater with privacy and compliance built in.

In short, these kinds of incremental advances are likely to provide the most immediate value for the bulk of surgeons. One big sticking point is that surgeons are still sorting out the privacy and compliance issues involved in improving the surgical arts. The latest cameras make it easier to record detailed footage of how a surgeon successfully performed a challenging operation. And surgical tool vendors also want to show off how their latest innovation made a big difference.

But the actual data is still a gray area. Hospitals allow surgeons to store and share it as long as it does not contain any identifying information. This gives surgeons some freedom to show off their best moves and learn from their peers. But it also limits the use of this information to improve patient outcomes in the long run. Daniel Goldberg, CEO of Epiqar, said, “This video is not integrated into the patient record, but it should be.”

The next significant advances in surgical automation could benefit from a deeper integration between surgical video and a patient’s medical record. This could help train AI capabilities that guide surgeons, much in the same way that Tesla’s constantly watching cameras could lead to more capable self-driving capabilities.
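As a thought experiment, the linkage Goldberg is calling for could be as simple as keying a de-identified video segment to a patient record so that it can later feed training pipelines. The schema and field names below are hypothetical, not any vendor’s actual data model.

```python
# Hypothetical sketch of attaching surgical video to a patient record so it
# can later serve as labeled AI training data. All field names are invented.

from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class VideoSegment:
    video_id: str
    procedure: str               # e.g. a procedure code or plain description
    recorded_at: datetime
    deidentified: bool = False   # must be True before sharing for teaching

@dataclass
class PatientRecord:
    patient_id: str
    videos: list = field(default_factory=list)

    def attach(self, segment: VideoSegment) -> None:
        # Consent checks, access control, and outcome labels would be
        # layered on top in any real system; this only stores the link.
        self.videos.append(segment)

record = PatientRecord("anon-0001")
record.attach(VideoSegment("vid-42", "appendectomy", datetime(2022, 1, 14), True))
print(len(record.videos))  # 1
```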

Waze for surgeons

In the short run, Ahmed believes surgeons are likely to see the most benefits from adding navigation capabilities to the surgical theater. A Google map of the body could help navigate surgical instruments to the right place or alert surgeons when they should consider removing an extra bit of suspect tissue. This could help even expert surgeons work more quickly and accurately, much like Google Maps helps people avoid traffic jams even when they know the route by heart.

But over time, these systems will improve, particularly as they gather more data, not just from video but from how the surgical instruments themselves are used. For the most part, the existing surgical robots are still directly operated by a human surgeon. They are valuable because they offer more control than might be possible with a manually wielded scalpel and forceps.

They also capture data about how the surgeons navigate various procedures. This data is already woven into a Da Vinci surgical simulator, which helps surgeons master new skills or practice before cutting someone open.

Down the road, these systems could support surgeons in more collaborative ways, much like driver-assistance features that automatically brake when required. In the immediate future, they will play more of a role in augmenting surgeons to do a better job rather than replacing them.

In the interim, surgeons, hospitals, and medical device makers will need to improve how they capture, manage, label, and use the data. Ahmed observed, “The hype is that AI will change the world. And now in 2022, AI can provide some good value, but it still does not do the data well.”


Nvidia Says the Metaverse Will Be Larger Than the Real World

Nvidia CEO Jensen Huang says that the virtual world will soon be larger than the physical one, not in terms of scale, but in terms of economics. In a Q&A following Nvidia’s fall GTC 2021 event, Huang described a world where companies put a greater focus on developing everything from cars to buildings in the virtual world.

“The virtual world will be larger in economics than the physical world,” the executive said. The comment stems from Nvidia’s Omniverse platform, which unifies A.I. platforms, 3D modeling, simulation, and animation under a single roof. At the event, Nvidia announced Omniverse Replicator, a tool focused on creating digital twins.

We’re not talking about people here. In April, Nvidia showed how it was able to create a digital twin of a BMW assembly factory. With the digital model, BMW has been able to reorganize machines to accommodate new launches, and even load them into the virtual space to walk around and see the assembly line in progress.

Models of virtual spaces are nothing new, but Omniverse Replicator goes further. It’s not a 3D modeling engine — it’s a synthetic data generation engine. Digital twins are physically simulated, allowing companies, governments, and more to simulate situations through the digital twin to anticipate problems or quickly react to them.

Nvidia Drive Sim, one of the two replicators available now, is an example. Instead of relying on data from the physical world, which is hard to control for, Drive Sim can generate data based on randomized conditions to better train autonomous vehicles. Nvidia suggests that the applications reach much further, though. Huang said these “virtual worlds will crop up like websites today,” tackling everything from social gatherings and games with friends to wildfires and the best way to combat them.
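This approach is often called domain randomization: sample scene conditions at random, render the scene, and keep the sampled parameters as perfect ground-truth labels. A toy sketch, with invented parameter ranges and field names rather than Drive Sim’s actual configuration:

```python
# Toy domain-randomization loop: randomize scene conditions, render
# (stubbed out here), and keep the ground truth for training. The
# parameter ranges and field names are invented for illustration.

import random

WEATHER = ["clear", "rain", "fog", "snow"]

def random_scene(seed: int) -> dict:
    rng = random.Random(seed)          # seeded so every sample is reproducible
    return {
        "weather": rng.choice(WEATHER),
        "sun_angle_deg": rng.uniform(0.0, 90.0),
        "num_vehicles": rng.randint(0, 40),
        "num_pedestrians": rng.randint(0, 15),
    }

def generate_dataset(n: int) -> list:
    samples = []
    for seed in range(n):
        scene = random_scene(seed)
        # A real engine renders sensor frames here; because the scene was
        # generated, its parameters double as perfect ground-truth labels.
        samples.append({"scene": scene, "labels": scene})
    return samples

print(generate_dataset(3)[0]["scene"]["weather"])
```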


In the future, Huang says that “we will buy and own 3D things, like we buy 2D songs and books today.” The CEO even pointed to a future where we buy and own 3D homes, cars, and art. Perhaps the boldest claim is that “creators will make more things in virtual worlds than they do in the physical world.”

With Facebook’s recent name change to Meta, there has been a lot of talk about the metaverse and how it will impact the future. Huang suggests that the metaverse is the future, where we replace or at least augment the physical world in a dystopian scene ripped straight from a sci-fi novel.

Everyone from Facebook to Apple is in on the virtual world craze. The world’s largest companies are all gunning for the top slot in an innovation that they say will be as significant as the internet.

But will it?

Digital twins and virtual worlds have a lot of applications, particularly in enterprise spaces, solving logistical problems, tackling large-scale threats, and generating data that would otherwise be impossible to gather physically. Whether that makes the jump to the consumer space, as Facebook and others have suggested, is a different matter.

Huang recognized this in the Q&A, saying that “virtual worlds have to be indistinguishable from the real world,” and that’s not where we’re at today. Nvidia announced Avatar at GTC, which is meant to build the A.I. models, voices, and more that will live in these virtual worlds. But a remarkably detailed render of a toy version of the CEO wasn’t enough to distract from the robotic A.I. voice.

Outside of accuracy, virtual worlds have more pressing, real-world issues to overcome. As the internet has already shown, the spread of misinformation has the potential to translate into real-world tragedy, and if left unchecked, the metaverse could amplify those issues like never before.

As for whether the metaverse will be larger than the physical world, we’ll just have to wait and see. Regardless, there are a lot of exciting technologies here, and a lot of lingering issues to address before we get to that point.


Inworld AI joins metaverse innovation with AI-driven virtual characters

Inworld AI, a company building a platform that lets users create AI-driven virtual characters to populate virtual worlds, announced today that it has raised $7 million in seed funding.

In an exclusive interview, Inworld’s cofounder and CEO Ilya Gelfenbeyn explained that “Inworld AI is a platform for building, basically brains for virtual characters” to populate virtual environments, including the metaverse, VR, and AR worlds. “What we provide is a toolset that enables developers to add brains and build these characters for the world, for different types of environments.”

To create convincingly immersive characters, Inworld AI attempts to mimic human cognitive abilities by leveraging a mixture of AI technologies, including natural language understanding and processing, optical character recognition, reinforcement learning, and conversational AI, to develop sophisticated virtual characters that can respond to questions and carry on conversations.

Inworld AI isn’t developing a solution to design visual avatars, but instead aims to create an AI development platform that enables companies that produce digital avatars and virtual characters to add more advanced communication to their visual designs.

The end goal is to offer a platform that visual avatar providers and organizations can use to develop “characters that can interact naturally with wide-ranging and completely open dialog,” Gelfenbeyn said. Speech, though, is just the tip of the iceberg in terms of the communicative capabilities of these AI characters.

As Gelfenbeyn notes, “Inworld characters should not be limited to speech only, but be able to interact with many of the modalities that humans use, such as facial gestures, body language, emotions, as well as physical interactions.”

Enhancing the metaverse experience with AI brains

“We structure our technology stack based on inspiration from the human brain. We have three main components: perception, cognition, and behavior. Perception is focused on input and understanding of the environment and other agents, using senses like audio and visual,” Gelfenbeyn said.

To enable virtual characters to perceive the environment audibly and visually, the organization uses a complex mixture of speech-to-text, rules engines, natural language understanding, OCR, and event triggers.

The next component is cognition. “Cognition is about the internal states of the character, such as memory, emotion, personality, goals, and background,” he said. Here Inworld AI will use natural language processing, emotion recognition, reinforcement learning, and goal-directed conversational AI to enhance the cognitive abilities of virtual characters.

Finally, “behavior is about the output or interactions of the character, such as speech, gestures, body language, and motion.” Technologies like state-of-the-art generative language models, reinforcement learning, and customized voice and emotion synthesis enable virtual characters to replicate human gestures and behaviors.

Together, these three components provide a solid framework for developers to build virtual characters that can respond in detail to natural language, perceive the digital environment, and offer significant interactions for users.
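As a mental model, the three components compose into a simple sense-think-act loop. The sketch below is an assumption about the rough shape of such a stack, not Inworld AI’s actual API.

```python
# Toy perception -> cognition -> behavior loop for a virtual character.
# The class shape and methods are invented for illustration only.

from dataclasses import dataclass, field

@dataclass
class Character:
    name: str
    memory: list = field(default_factory=list)   # cognition: internal state

    def perceive(self, utterance: str) -> str:
        # Perception: turn raw input (speech, vision) into something usable.
        return utterance.strip().lower()

    def think(self, observation: str) -> str:
        # Cognition: update memory and choose an intent; a real system would
        # bring in language models, emotion recognition, and goals here.
        self.memory.append(observation)
        return "greet" if "hello" in observation else "chat"

    def act(self, intent: str) -> str:
        # Behavior: render the intent as speech (gestures and motion elided).
        replies = {"greet": f"Hi, I'm {self.name}!", "chat": "Tell me more."}
        return replies[intent]

npc = Character("Ava")
print(npc.act(npc.think(npc.perceive("Hello there"))))  # Hi, I'm Ava!
```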

Investors include Kleiner Perkins, CRV, and Meta. Inworld AI’s launch is well-timed, with publicity for the metaverse at an all-time high following Facebook’s rebrand to Meta, and decision-makers eager to identify what solutions are available to interact with customers in the metaverse.

As Izhar Armony, general partner at CRV, explained, “the team is growing rapidly, so now is an exciting time for people interested in VR, games, and virtual worlds to partner with and join the company, so they can be at the forefront of this rapidly growing space.”

New kid on the block 

Inworld AI is entering the highly competitive space of AI and machine learning development, competing against established providers like OpenAI and Google AI that let you create machine learning models. Yet Inworld AI fills a unique gap in the market: it provides a highly specialized solution for developing conversational AI for AI-driven virtual characters rather than generic machine learning models.

At the same time, the AI solutions Inworld AI is developing will enable virtual character creation that extends well beyond the complexity of AI-driven avatars from the likes of Pandora Bots and Soul Machines.

“Many existing companies have solutions that provide limited answers to script triggers and dialog. In fact, our team built one of the largest providers of such services (API.ai, acquired by Google and now known as Google Dialogflow) so we are very familiar with their capabilities,” Gelfenbeyn said.

“Other companies are beginning to experiment with new technologies (such as large language models) but we believe that these parts, while essential, only provide one piece of the stack necessary to really bring characters to life,” he said.

In other words, these solutions have only scratched the surface of human-AI interactions, and Inworld AI’s approach to replicate human cognition is designed to create much more intelligent virtual entities. While Inworld AI’s mission to build AI brains for virtual characters is ambitious, the team’s AI development pedigree speaks for itself.

Inworld AI’s founders include a swath of experts: Gelfenbeyn, who was previously the CEO of API.ai; chief technology officer Michael Ermolenko, who led machine learning development at API.ai and the Dialogflow NLU/AI team at Google; and product director Kylan Gibbs, who previously led product for applied generative language models at DeepMind.

With this experienced team, the organization is in a strong position to set the standard for interactive virtual characters. After all, “Widespread success of the metaverse and other immersive applications depends on how enveloping those experiences can be,” said Ilya Fushman, investment partner at Kleiner Perkins.

“Inworld AI is building the engine that enables businesses to provide that exciting depth of experience and captivate users. With the team’s track record in providing developers with the tools they need to build AI-fueled applications, we’re excited to support the company in building the future of immersive experiences,” Fushman explained.

Virtual characters are key for immersion

With the metaverse boom beginning to pick up steam, Inworld AI also has a unique role to play in giving providers a toolset they can use to create sophisticated virtual characters and more compelling digital experiences for users. The level of immersion offered by these experiences will determine whether the metaverse lives or dies.

The types of experiences that developers can use Inworld AI to build are diverse. As Gelfenbeyn explained, “Immersive realities continue to accelerate, with an increasingly diverse and fascinating ecosystem of worlds and use cases.”

“Virtual spaces like Meta’s Horizon Worlds, Roblox, Fortnite, and others that offer unique experiences and enable users to exist in other worlds will also continue to see quick demand from businesses, offering everything from games to story content to new enterprise applications,” Gelfenbeyn said.

Although Gelfenbeyn noted that the technology is simply meant to give providers a “native population” for digital worlds and more realistic experiences, the metaverse is also becoming a new channel that technical decision-makers can use to interact with customers in the future.

While complete, immersive realities with sophisticated virtual characters are a long way off, the Inworld AI team’s knowledge of conversational AI will undoubtedly help other providers move closer toward building vibrant, virtually populated, and interactive digital worlds.
