Categories
AI

Inworld AI joins metaverse innovation with AI-driven virtual characters



Inworld AI, a company that aims to develop a platform that enables users to create AI-driven virtual characters which can be used to populate virtual worlds, announced today that it has raised $7 million in seed funding.

In an exclusive interview, Inworld’s cofounder and CEO Ilya Gelfenbeyn explained that “Inworld AI is a platform for building, basically brains for virtual characters” to populate virtual environments, including the metaverse, VR, and AR worlds. “What we provide is a toolset that enables developers to add brains and build these characters for the world, for different types of environments.”

To create truly immersive characters, Inworld AI attempts to mimic human cognitive abilities by combining AI technologies such as natural language understanding and processing, optical character recognition, reinforcement learning, and conversational AI into sophisticated virtual characters, ones that can respond to questions and carry on open-ended conversations.

Inworld AI isn’t developing a solution to design visual avatars, but instead aims to create an AI development platform that enables companies that produce digital avatars and virtual characters to add more advanced communication to their visual designs.

The goal is to offer a platform that visual avatar providers and organizations can use to develop “characters that can interact naturally with wide-ranging and completely open dialog,” Gelfenbeyn said. Still, speech is just the tip of the iceberg in terms of these characters’ communicative capabilities.

As Gelfenbeyn notes, “Inworld characters should not be limited to speech only, but be able to interact with many of the modalities that humans use, such as facial gestures, body language, emotions, as well as physical interactions.”

Enhancing the metaverse experience with AI brains

“We structure our technology stack based on inspiration from the human brain. We have three main components: perception, cognition, and behavior. Perception is focused on input and understanding of the environment and other agents, using senses like audio and visual,”  Gelfenbeyn said.

To enable virtual characters to perceive the environment audibly and visually, the organization uses a complex mixture of speech-to-text, rules engines, natural language understanding, OCR, and event triggers.

The next component is cognition. “Cognition is about the internal states of the character, such as memory, emotion, personality, goals, and background,” he said. Here, Inworld AI uses natural language processing, emotion recognition, reinforcement learning, and goal-directed conversational AI to enhance the cognitive abilities of virtual characters.

Finally, “behavior is about the output or interactions of the character, such as speech, gestures, body language, and motion.” Technologies like state-of-the-art generative language models, reinforcement learning, and customized voice and emotion synthesis enable virtual characters to replicate human gestures and behaviors.

Together, these three components provide a solid framework for developers to build virtual characters that can respond in detail to natural language, perceive the digital environment, and offer significant interactions for users.
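As a rough illustration, the perception-cognition-behavior split Gelfenbeyn describes can be sketched as a simple pipeline. Everything below is hypothetical: the class and method names are invented for this sketch and are not Inworld’s actual API.

```python
# Hypothetical sketch of a perception -> cognition -> behavior pipeline,
# loosely modeled on the three-component stack described above.

class Perception:
    """Turns raw input (e.g. transcribed speech) into a structured observation."""
    def observe(self, utterance: str) -> dict:
        return {"text": utterance, "is_question": utterance.strip().endswith("?")}

class Cognition:
    """Holds internal state (memory, personality) and decides an intent."""
    def __init__(self, personality: str):
        self.personality = personality
        self.memory: list[dict] = []

    def decide(self, observation: dict) -> str:
        self.memory.append(observation)  # remember what was said
        return "answer" if observation["is_question"] else "acknowledge"

class Behavior:
    """Renders the chosen intent as output modalities (speech, gesture, ...)."""
    def act(self, intent: str) -> dict:
        speech = "Let me think about that." if intent == "answer" else "I see."
        return {"speech": speech, "gesture": "nod"}

def run_character(utterance: str) -> dict:
    perception, cognition, behavior = Perception(), Cognition("curious"), Behavior()
    return behavior.act(cognition.decide(perception.observe(utterance)))
```

A real system would replace each stage with models (speech-to-text, language models, animation synthesis), but the data flow between the three components stays the same.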

Investors include Kleiner Perkins, CRV, and Meta. Inworld AI’s launch is well-timed, with publicity for the metaverse at an all-time high following Facebook’s rebrand to Meta, and decision-makers eager to identify what solutions are available to interact with customers in the metaverse.

As Izhar Armony, general partner at CRV, explained, “the team is growing rapidly, so now is an exciting time for people interested in VR, games, and virtual worlds to partner with and join the company, so they can be at the forefront of this rapidly growing space.”

New kid on the block 

Inworld AI is entering the highly competitive space of AI and machine learning development, up against established providers like OpenAI and Google AI that let users build machine learning models. Yet Inworld AI fills a unique gap in the market: it provides a highly specialized solution for developing conversational AI for AI-driven virtual characters rather than generic machine learning models.

At the same time, the AI solutions Inworld AI is developing will enable virtual character creation that extends well beyond the complexity of AI-driven avatars from providers like Pandorabots and Soul Machines.

“Many existing companies have solutions that provide limited answers to script triggers and dialog. In fact, our team built one of the largest providers of such services (API.ai, acquired by Google and now known as Google Dialogflow) so we are very familiar with their capabilities,” Gelfenbeyn said.

“Other companies are beginning to experiment with new technologies (such as large language models) but we believe that these parts, while essential, only provide one piece of the stack necessary to really bring characters to life,” he said.

In other words, these solutions have only scratched the surface of human-AI interactions, and Inworld AI’s approach to replicate human cognition is designed to create much more intelligent virtual entities. While Inworld AI’s mission to build AI brains for virtual characters is ambitious, the team’s AI development pedigree speaks for itself.

Inworld AI’s founders bring deep expertise: Gelfenbeyn was previously the CEO of API.ai; chief technology officer Michael Ermolenko led machine learning development at API.ai and the Dialogflow NLU/AI team at Google; and product director Kylan Gibbs previously led product for applied generative language models at DeepMind.

With this experienced team, the organization is in a strong position to set the standard for interactive virtual characters. After all, “Widespread success of the metaverse and other immersive applications depends on how enveloping those experiences can be,” said Ilya Fushman, investment partner at Kleiner Perkins.

“Inworld AI is building the engine that enables businesses to provide that exciting depth of experience and captivate users. With the team’s track record in providing developers with the tools they need to build AI-fueled applications, we’re excited to support the company in building the future of immersive experiences,” Fushman explained.

Virtual characters are key for immersion

With the metaverse boom beginning to pick up steam, Inworld AI also has a unique role to play, giving providers a toolset they can use to build sophisticated virtual characters and more compelling digital experiences for users. The level of immersion those experiences offer will determine whether the metaverse lives or dies.

The types of experiences that developers can use Inworld AI to build are diverse. As Gelfenbeyn explained, “Immersive realities continue to accelerate, with an increasingly diverse and fascinating ecosystem of worlds and use cases.”

“Virtual spaces like Meta’s Horizon Worlds, Roblox, Fortnite, and others that offer unique experiences and enable users to exist in other worlds will also continue to see quick demand from businesses, offering everything from games to story content to new enterprise applications,” Gelfenbeyn said.

Although Gelfenbeyn noted that the technology is simply meant to give digital worlds a “native population” that makes experiences feel realistic, the metaverse is also becoming a new channel through which technical decision-makers can interact with customers.

While fully immersive realities populated with sophisticated virtual characters are a long way off, the Inworld AI team’s knowledge of conversational AI will undoubtedly help other providers move closer to building vibrant, virtually populated, interactive digital worlds.

VentureBeat

VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative technology and transact.

Our site delivers essential information on data technologies and strategies to guide you as you lead your organizations. We invite you to become a member of our community, to access:

  • up-to-date information on the subjects of interest to you
  • our newsletters
  • gated thought-leader content and discounted access to our prized events, such as Transform 2021: Learn More
  • networking features, and more

Become a member


Categories
AI

Lumigo joins race to fill serverless observability gap

Lumigo, a company that aims to plug a gap in serverless monitoring, has announced new features to do just that. The company has also announced its latest capital injection of $29 million in a series A round of funding.

Serverless computing has exploded in popularity of late because it allows IT departments to run code without managing servers. The cloud provider runs the servers behind the application, and IT pays only for the compute time consumed.

But how does IT monitor the performance of those applications? In serverless environments, the infrastructure consists of short-lived functions running inside the provider’s proprietary boundaries, which makes installing monitoring agents for logging and tracing analysis challenging.

These challenges have given rise to serverless monitoring tools such as Lumigo, Epsagon, and the Splunk Observability Suite.

Lumigo’s visual map

Lumigo says it can now monitor containers, Kubernetes, and virtual machines, extending coverage to hybrid distributed apps that mix serverless functions with containers and full-fledged virtual machines. Inside these apps, Lumigo can track requests per service, surface latency issues, and help locate and fix hard-to-reproduce bugs.

Lumigo’s distributed tracing is a one-click solution that lets developers seamlessly find and fix issues in serverless and microservices environments. Developers at a host of companies, including Medtronic, Fortinet, Berlitz, Optibus, Symantec, and Allianz, use Lumigo’s services.

With a virtual stack trace of every service participating in a transaction, Lumigo displays everything in a visual map, and no manual code changes are needed to visualize the environment. Because you can see the end-to-end execution duration of each service, Lumigo can identify your worst latency offenders. Machine learning lets Lumigo preempt issues and raise alerts, keeping the cost impact of those issues low.
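Conceptually, finding the worst latency offender in a distributed trace is simple: sum each service’s span durations and pick the largest. The sketch below is a generic illustration with an invented span format, not Lumigo’s actual data model.

```python
# Toy model of spotting "worst latency offenders" in a distributed transaction.
from dataclasses import dataclass

@dataclass
class Span:
    service: str
    start_ms: float
    end_ms: float

    @property
    def duration_ms(self) -> float:
        return self.end_ms - self.start_ms

def end_to_end_duration(spans: list) -> float:
    """Wall-clock time from the first span's start to the last span's end."""
    return max(s.end_ms for s in spans) - min(s.start_ms for s in spans)

def worst_offender(spans: list) -> str:
    """Service whose spans account for the most cumulative time."""
    totals = {}
    for s in spans:
        totals[s.service] = totals.get(s.service, 0.0) + s.duration_ms
    return max(totals, key=totals.get)

# A hypothetical transaction crossing four services:
trace = [
    Span("api-gateway", 0, 20),
    Span("auth", 20, 45),
    Span("orders-db", 45, 180),   # slow query dominates the transaction
    Span("notifier", 180, 200),
]
```

Production tracers add context propagation, sampling, and clock-skew handling on top of this basic accounting.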

Several of Lumigo’s competitors match its capabilities. Epsagon, for instance, builds its services on distributed tracing; like Lumigo, its AI-powered methods can preempt and neutralize issues before they occur by raising relevant alerts. Similarly, Splunk’s Observability Suite offers end-to-end observability for serverless applications through tracing and automated incident response, and its microservices-oriented architecture makes real-time visibility and performance monitoring practical.

Overall, these tools go a long way toward closing the serverless observability gap by combining manual and automated observation techniques. Developers can spend more time writing functional code rather than instrumentation, and these benefits are motivating investors to get increasingly involved in companies like Lumigo.

This is Lumigo’s second funding round, following an initial $8 million seed round two years ago.



Categories
Game

Samsung Joins Cloud Gaming Market With Exclusive Service

Samsung recently announced that it is joining the ever-growing cloud gaming battle, although its approach is likely too narrowly scoped to carve out much of the current market. During its Samsung Developer Conference 2021 keynote, the company announced that it is developing its own cloud gaming platform, though it will only be available to certain TV owners.

As with any other cloud gaming service, Samsung’s as-yet-unnamed platform would let users play games without high-end hardware like a PS5 or Xbox Series X/S. All they would need instead is a Samsung Tizen smart TV, various models of which have been available for years now. That said, while the TV market is massive, the percentage of people who own one of those TVs is likely a small slice of the pie.

That makes Samsung’s cloud gaming service stand out from the pack, and not in a great way. Xbox Game Pass, for instance, lets users play games over the cloud on pretty much any screen they have in their house, from phones to laptops, and Google Stadia can run on nearly any TV or computer. Limiting its cloud gaming service to its own smart TVs makes it an extremely exclusive offering, one Samsung will have to expand beyond its own hardware at some point to gain any meaningful foothold.

Samsung only briefly touched on its cloud gaming platform during its keynote presentation, simply announcing that it exists and is in development. It’s not clear when users will actually be able to start streaming games straight to their Samsung TVs.


Categories
AI

Medical device leader Medtronic joins race to bring AI to health care

Medtronic, the world’s largest medical device company, is significantly increasing its investments into AI and other technologies, in what it says is an effort to help the health care industry catch up with other industries.

While many other industries have embraced technology, health care has been slower to adopt it: studies show that only 20% of consumers would trust AI-generated health care advice.

VentureBeat interviewed Torod Neptune, Medtronic’s senior vice president and chief communications officer, and Gio Di Napoli, president of Medtronic’s Gastrointestinal Unit, to discuss the company’s vision of the future of health care technology.

Digital transformation in health care

Neptune spoke about Medtronic’s transition beyond traditional med tech to more innovative solutions using AI. He noted that health care technology — through its unusual scale and ability to harness data analytics, algorithms, and intelligence — plays a significant role in solving big problems in the AI field.

Artificial intelligence increases the detection of early cancer by 14% compared to a standard colonoscopy, Di Napoli said. This is very important because “every percentage of increase in detection reduces the risk of cancer by 2%,” he said.
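Taking those quoted figures at face value, the back-of-envelope math works out as follows (a rough illustration of the quote, not a clinical claim):

```python
# A 14-percentage-point gain in detection, at the quoted ~2% risk
# reduction per point, implies roughly a 28% overall risk reduction.
detection_increase_points = 14
risk_reduction_per_point = 2  # percent per detection point, per the quote

overall_risk_reduction = detection_increase_points * risk_reduction_per_point
print(overall_risk_reduction)  # 28 (percent)
```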

Building on Medtronic’s medical devices already serving millions (like its miniature pacemaker, smart insulin pump, and more), the company’s plan to make health care more predictive and personal led to the development of GI Genius Intelligent Endoscopy Module (granted USFDA de novo clearance on April 9, 2021, and launched on April 12, 2021).


Above: Medtronic says its GI Genius Intelligent Endoscopy Module is the first-to-market computer-aided polyp detection system powered by artificial intelligence.

The GI Genius module is the first and only artificial intelligence system for colonoscopy, according to Medtronic, assisting physicians in detecting precancerous growths and potentially addressing 19 million colonoscopies annually. The company says the module serves as a vigilant second observer, using sophisticated AI-powered technology to detect and highlight the presence of precancerous lesions with a visual marker in real time.

Investing in innovative health care

Medtronic has launched more than 190 health care technology products in the past 12 months and invests $2.5 billion yearly in research and development (R&D). Medtronic’s CEO, Geoff Martha, recently announced a 10% boost in R&D spending in FY22.

This enormous investment, the largest R&D increase in company history, underscores Medtronic’s focus on innovation and technology.

The company says it plans to expand the number of patients it serves each year, with the goal being 85 million by FY25.

According to Di Napoli, “AI is here. And it’s here to stay.”

A new era of health care

Speaking further about health care technology, Di Napoli says, “I can tell from my personal experience within the gastrointestinal business that there is a need for training and getting to know artificial intelligence as a partner and not as an enemy. And I think it’s critical for companies like ours to keep collecting data to improve our algorithms, to improve how our customers decide based on this data, and also improve patient outcomes with this.”

Although data collection comes with security concerns and privacy issues, Di Napoli says the company is in constant communication with the FDA to understand the processes it must put in place to protect sensitive data going forward.

Neptune believes that technology and data are driving patient empowerment far more significantly, helped by users growing more comfortable with new tools over the last 20 months. He said, “I think the pandemic has enabled more comfort and consideration, and there’s a global shift and willingness to engage and adopt new technological solutions.”



Categories
Game

Battlefield V Joins Game Pass’ Cloud Gaming Lineup

The Xbox Game Pass library is growing once again, with a treasure trove of games being added to the service in just seven days. While 10 games altogether are coming next week, the clear heavy hitter is Microsoft Flight Simulator. Until it’s available, players can look forward to two new games being added to the service immediately: Battlefield V on cloud and Cris Tales on cloud, console, and PC.

A picture is worth a thousand words (and several upcoming games) https://t.co/JknPMLe9Cq pic.twitter.com/D16z80BrrF

— Xbox Game Pass (@XboxGamePass) July 20, 2021

Other titles are being added to the subscription service’s library in the coming days. Atomicrops (cloud, console, PC), Raji: An Ancient Epic (cloud, console, PC), and Last Stop (cloud, console, PC) are all coming to Xbox Game Pass on July 22.

On July 26, players can jump into Crimson Skies: High Road to Revenge (cloud, console), and an original Xbox classic, Blinx: The Time Sweeper (cloud, console).

The real meat of this month’s Xbox Game Pass offerings comes later on in the month. On July 27, subscribers playing on their Xbox Series X or S can chart a flight across the world in Microsoft Flight Simulator. Just a few days later, on July 29, players get access to Omno (cloud, console, PC), Project Wingman (PC), The Ascent (PC), and the dodgeball fighting game Lethal League Blaze (cloud, console, PC).

Of course, with so many titles being added to the Xbox Game Pass library, some will have to go. Leaving the service on July 31 are It Lurks Below, The Touryst, and UnderMine. If you’re not ready to lose any of these games just yet, Game Pass subscribers can purchase them at up to 20% off.


Categories
Tech News

Google Measure AR app joins the notorious graveyard

Google is never shy about “retiring” apps, services, and products, sometimes with little prior notice. That’s true for products that have cost Google a lot of money as well as for some that make it money, and it’s even more true for those that don’t. Apps come and go regardless of their importance or significance, and one of the most recent to get the boot is Measure, one of Google’s earliest mobile AR apps and one it used to showcase its AR technology even before mobile AR became a thing.

Google may actually have been one of the first to push the idea of using phones for AR and VR, but those efforts have become footnotes in history. Half a decade or so ago, Google and Lenovo tried to commercialize the idea, then called Project Tango, in a few products, including a gigantic phone and a tablet. That’s all in the past now, and so is one of Project Tango’s earliest apps.

Measure did exactly what its name says, or at least tried to. Using just the phone’s cameras and some AR and AI magic, users could draw lines in the camera’s viewfinder to measure objects. As magical as that may sound, it didn’t always work accurately, but it could at least offer rough estimates when a proper measuring tool wasn’t at hand.
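Under the hood, this style of AR measuring boils down to projecting two screen taps onto detected surfaces to get 3D points, then taking the straight-line distance between them. The sketch below is a generic illustration and assumes the AR framework has already returned the two points in meters.

```python
# Euclidean distance between two 3D hit-test results, the core of an AR ruler.
import math

def distance_m(p1, p2):
    """Straight-line distance between two (x, y, z) points in meters."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p1, p2)))

# Two hypothetical hit-test results, e.g. opposite edges of a tabletop:
edge_a = (0.0, 0.0, 0.0)
edge_b = (0.3, 0.4, 0.0)
print(round(distance_m(edge_a, edge_b), 2))  # 0.5 meters
```

The hard part, and where apps like Measure were hit-or-miss, is the surface detection that produces those points, not the distance math itself.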

Despite still being listed on Google’s AR and VR experiences site, the Measure app no longer exists on the Google Play Store. It will keep working for those who still have it installed, but you can no longer find it, or reinstall it if you removed it earlier. For all intents and purposes, that piece of Google’s AR history is now gone.

Of course, pulling the plug on apps, especially minor ones like Measure, isn’t uncharacteristic for Google. The company still offers other AR experiences and seems to be betting on WebXR as the way forward. Given how inconsistent its VR and AR commitments have been, however, silent moves like this don’t exactly inspire confidence.


Categories
Tech News

YouTube TV joins main video app on some Vizio SmartCast TVs

Only weeks after Google merged its main YouTube and YouTube TV apps on Roku, the company is back with a similar update for the latest Vizio smart TVs. Assuming you have a supported model, you’ll be able to access YouTube TV from within the regular YouTube app, eliminating the need to install and toggle between two different products.

Early last month, Google announced that it had merged YouTube TV with the main YouTube app on Roku. The change followed a dispute between the two companies that resulted in Roku removing the YouTube TV app, but leaving the main YouTube offering.

Google had said at the time that it would bring this change to other devices ‘over time.’ This week brings the latest update to this matter, with Google revealing that YouTube TV is now merged with the main YouTube app for Vizio users who own one of the company’s SmartCast 2020 or 2021 devices.

YouTube TV is found within the main YouTube app on the left-hand side of the interface. You’ll need to scroll to the bottom of the side navigation bar. Users will need to sign in to their YouTube TV account the first time they launch the service from within the main YouTube app, according to Google.

The rollout onto another series of devices makes this merge more than just a workaround to the Roku problem. It’s unclear which devices may be next in the pipeline to get YouTube TV support in the main YouTube app, but users can keep tabs on the platform’s Twitter account for future updates.




Categories
AI

Otter.ai automatically joins and transcribes calendared Zoom meetings



AI-powered transcription company Otter.ai has announced a new integration that automatically joins, records, and transcribes scheduled Zoom meetings.

The Los Altos, California-based company, which raised a fresh $50 million tranche of funding just a few months ago, has offered integrations with Zoom for a while (as well as with Google Meet). However, this latest tie-up goes further by carrying out all the manual steps involved in joining a meeting, transcribing it, and sharing notes with all users.

Assist

Otter Assistant, as the new feature is called, connects with a user’s Google or Outlook calendar (once permissions have been granted) to see when a Zoom meeting is due to start. It then joins the call and starts recording on schedule, with no manual actions required.

For transparency, Otter Assistant shows up on the call as a participant. Every other participant can view the live meeting notes, and any notes or text highlights they make are visible to everyone.
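The calendar-scanning step an assistant like this needs can be sketched roughly as follows: look through upcoming events, pull out the Zoom join link, and pick the next meeting to join. The event fields and URL pattern below are assumptions for illustration, not Otter’s implementation.

```python
# Find the next calendar event that contains a Zoom join link.
import re

ZOOM_LINK = re.compile(r"https://[\w.-]*zoom\.us/j/\d+\S*")

def next_zoom_meeting(events):
    """Events are assumed sorted by start time; each has 'title' and 'body'."""
    for event in events:
        match = ZOOM_LINK.search(event.get("body", ""))
        if match:
            return {"title": event["title"], "join_url": match.group(0)}
    return None

# Hypothetical calendar feed:
calendar = [
    {"title": "1:1 with Sam", "body": "Call me instead"},
    {"title": "Design review", "body": "Join: https://us02web.zoom.us/j/1234567890"},
]
```

A real integration would go through the Google or Outlook calendar APIs with OAuth permissions, then hand the extracted link to a bot client that joins the call.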

Above: Otter Assistant

One key differentiator from the existing Zoom integration is that this now works with all Zoom calls, regardless of whether the user is the official host.

This launch also serves as a major boost to Zoom’s burgeoning app ecosystem, something the company has been keen to encourage to make its platform more useful and, ultimately, stickier.

Zoom itself has also been on something of a feature launch spree of late, having recently brought Alexa for Business to Zoom conference room calls and rolled out a new “immersive view” to position remote participants in the same virtual room.

The Otter Assistant is available as part of Otter.ai’s business plan, which costs $20 per user per month.



Categories
Game

ArtStation joins Epic Games: What will change

ArtStation announced a collaboration today with Epic Games that could change the future of the art portfolio ecosystem. At first, things will be the same. Epic acquired ArtStation, but ArtStation will “continue to operate as an independently branded platform.” They’ll also be “collaborating closely with the Unreal Engine team.”

ArtStation is reducing fees on the ArtStation Marketplace – that’s a GOOD thing. The standard 30% fee will drop to 12%. If that drop seems familiar, perhaps you recognize it from the fee cuts for software listed on the Microsoft Store, or on the Epic Games Store itself. It would appear that Epic will continue to lead the charge in lowering the standard fee that’s been on the books at software-hosting companies for a while now – 30 to 12 for all!
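For a sense of what that fee change means in practice, here’s the quick arithmetic on a hypothetical $100 marketplace sale:

```python
# Seller's take-home on a $100 sale under the old and new fee rates.
sale = 100.00
old_take = round(sale * (1 - 0.30), 2)  # 30% platform fee
new_take = round(sale * (1 - 0.12), 2)  # 12% platform fee
print(old_take, new_take)  # 70.0 88.0
```

In other words, the seller’s cut rises from $70 to $88 on the same sale, an 18-point swing in the creator’s favor.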

Another GOOD thing is that ArtStation Learning will be “free to all users for the remainder of 2021.” After that, it’ll very likely carry some sort of fee, but we don’t yet know how much.

Though ArtStation will be under the Epic umbrella, this does not necessarily mean that ArtStation will change the way it supports all varieties of creators and mediums. In fact an Epic Games statement suggested that “Artists will continue to benefit from ArtStation’s unique platform as they do today and ArtStation will continue to support creators across mediums and genres – including those that do not use Unreal Engine.”

So it would SEEM that this will be a relatively positive move for most people involved with ArtStation. If you’ve used ArtStation in the past, you know there’s a shocking lack of app support through app stores – cross your fingers that this might rectify that. You might also recognize that ArtStation is fantastic but doesn’t get anywhere near the use it should, given its benefits over competitors. This might also rectify that – but we shall see!


Categories
Game

Fortnite x G.I. Joe crossover is official: Snake Eyes joins the hunt

As anticipated, the character Snake Eyes from the G.I. Joe team has been added to Fortnite as its latest hunter, joining past additions ranging from The Walking Dead zombie-killers to the Terminator and Predator. Epic revealed the character following a teaser posted on its Fortnite Twitter account that essentially confirmed the new hunter.

Last week, Epic published a tweet with the term ‘Ninja Master,’ as well as an audio transmission from Agent Jonesy that ended with the phrase, “And knowing is half the battle.” Anyone familiar with the classic G.I. Joe cartoon could immediately identify it as a teaser about Snake Eyes, the saber-wielding character from the fictional team.

Snake Eyes isn’t part of the Season 5 Battle Pass, though — you’ll have to head into the game’s Item Shop to purchase the skin and its related gear if you want to play as the character. It remains unclear what role the ‘hunters’ will play in the game’s finale, but presumably, they’ll all make an appearance.

What is a Fortnite hunter? You’ll want to watch the Season 5 launch trailer to get an understanding of this season’s storyline and why there are strange portals to other worlds currently on the battle royale island.

Agent Jonesy is currently tripping through time and various dimensions in search of capable hunters who he can bring to the Fortnite universe. Each new hunter added to the game is, in keeping with the storyline, a powerful warrior brought from a different world to temporarily aid the island inhabitants.
