
A Meta prototype lets you build virtual worlds by describing them

Meta is testing an artificial intelligence system that lets people build parts of virtual worlds by describing them, and CEO Mark Zuckerberg showed off a prototype at a live event today. The proof of concept, called Builder Bot, could eventually draw more people into Meta’s Horizon “metaverse” virtual reality experiences. It could also advance creative AI tech that powers machine-generated art.

In a prerecorded demo video, Zuckerberg walked viewers through the process of making a virtual space with Builder Bot, starting with commands like “let’s go to the beach,” which prompts the bot to create a cartoonish 3D landscape of sand and water around him. (Zuckerberg describes this as “all AI-generated.”) Later commands range from broad demands like creating an island to extremely specific requests like adding altocumulus clouds and — in a joke poking fun at himself — a model of a hydrofoil. They also include playing sound effects like “tropical music,” which Zuckerberg suggests is coming from a boombox that Builder Bot created, although it could also have been general background audio. The video doesn’t specify whether Builder Bot draws on a limited library of human-created models or if the AI plays a role in generating the designs.

Several AI projects have demonstrated image generation based on text descriptions, including OpenAI’s DALL-E, Nvidia’s GauGAN2, and VQGAN+CLIP, as well as more accessible applications like Dream by Wombo. But these well-known projects involve creating 2D images (sometimes very surreal ones) without interactive components, although some researchers are working on 3D object generation.

As described by Meta and shown in the demo, Builder Bot appears to be using voice input to add 3D objects that users can walk around, and Meta is aiming for more ambitious interactions. “You’ll be able to create nuanced worlds to explore and share experiences with others with just your voice,” Zuckerberg promised during the event keynote. Meta made several other AI announcements during the event, including plans for a universal language translator, a new version of a conversational AI system, and an initiative to build new translation models for languages without large written data sets.

Zuckerberg acknowledged that sophisticated interactivity, including the kinds of usable virtual objects many VR users take for granted, poses major challenges. AI generation can pose unique moderation problems if users ask for offensive content or the AI’s training reproduces human biases and stereotypes about the world. And we don’t know the limits of the current system. So for now, you shouldn’t expect to see Builder Bot pop up in Meta’s social VR platform — but you can get a taste of Meta’s plans for its AI future.

Update 12:50PM ET: Added details about later event announcements from Meta.



What to consider before adopting an intelligent virtual assistant



Contact centers have evolved into dynamic communications hubs that have been put to the test over the past two years.

Companies have begun to invest in intelligent virtual assistants (IVAs) because they are effective at improving contact center productivity and the customer experience. However, to get the best return from these virtual assistants, you need a clear strategy. Without clear direction, you ultimately jeopardize the customer experience.

Here are questions to ask and challenges to consider before expanding your IVA strategy. Checking these boxes will help ensure the IVA meets your business needs and customer communications preferences.

Question: What level of complexity will the IVA support?

As I noted above, one of the first and most important questions you should ask is, “What is the general strategy for the IVA?” Is the IVA going to supplement your agents, allowing them to focus on more complex tasks? Or is the IVA going to focus on one or a few very specific use cases (e.g., password resets, bill payments or two-factor authentication)?

When diving into your IVA strategy, it’s really about knowing the complexity you want the IVA to handle and how many of those inquiries you wish to block from being escalated to live agents. A clear strategy and knowing the complexities that could lie ahead are critical to successful integration. 

Challenge: Understanding the technology

Understanding the technology is central to designing IVAs that will support the required complexity. Knowing the differences between IVAs and other contact center solutions, such as chatbots, voicebots and interactive voice response (IVR), will help ensure your IVA can effectively support specific use cases, regardless of complexity. Below are the different contact center technologies and their key differences, followed by a short code sketch that illustrates the rule-based versus intent-driven distinction.

  • Chatbot: A chatbot is a program that can automatically communicate with a user without a human agent’s help. Chatbots have limited capabilities and typically interact via text. They are rule-based and task-specific, which allows them to pose questions based on predetermined options. They lack sophistication and will not make inferences from previous interactions with customers. Chatbots are best suited to question-and-answer use cases.
  • Voicebot: Voicebots and chatbots have similar functionality. The main difference between a chatbot and a voicebot is the channel. Voicebots involve more complexity because they incorporate speech-to-text, which allows callers to speak to the bot. These solutions use IVR software.
  • IVR: Briefly mentioned above, IVR software is an automated phone system technology that interacts with callers and gathers information based on how the caller navigates a call menu. It does not use AI. Callers move through menu options via spoken responses or by pressing numbers on their phones, and the IVR routes them to specific departments or specialists. Some may consider an IVR to be a simple voicebot.
  • IVA: An intelligent virtual assistant is the most sophisticated of the options, and you can use it across various channels. IVAs process natural language requests using natural language understanding or natural language processing and understand situational context, allowing them to handle a more complex range of questions and interactions. These tools closely resemble human speech and can understand queries with spelling and grammatical errors, slang or other potentially confusing language, much like a human agent.
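
To make the distinction concrete, here is a minimal, hypothetical sketch in Python contrasting a rule-based chatbot with an IVA-style handler. The rules, intents, and the classify_intent() stand-in are illustrative assumptions, not any vendor’s actual product or API.

```python
# Hypothetical sketch: the rules, intents, and responses below are
# illustrative placeholders, not any vendor's product or API.

RULES = {
    "reset password": "Go to Settings > Security > Reset password.",
    "pay bill": "You can pay your bill at example.com/pay.",
}

def chatbot_reply(message: str) -> str:
    """Rule-based chatbot: matches fixed keywords and makes no inferences."""
    text = message.lower()
    for trigger, answer in RULES.items():
        if trigger in text:
            return answer
    return "Sorry, I can only help with: " + ", ".join(RULES)

def classify_intent(message: str) -> str:
    """Stand-in for an NLU model; a real IVA would call a trained classifier."""
    text = message.lower()
    if "bill" in text or "pay" in text:
        return "pay_bill"
    if "agent" in text or "human" in text:
        return "escalate"
    return "unknown"

def iva_reply(message: str, session: dict) -> str:
    """IVA-style handler: combines the detected intent with session context
    to decide whether to answer, ask a follow-up, or escalate to an agent."""
    intent = classify_intent(message)
    history = session.setdefault("history", [])
    history.append(intent)
    if intent == "pay_bill":
        return "Sure, which account would you like to pay from?"
    if intent == "escalate" or history.count("unknown") >= 2:
        return "Let me connect you with a live agent."
    return "Could you tell me a bit more about what you need?"

if __name__ == "__main__":
    session = {}
    print(chatbot_reply("I want to reset password please"))  # keyword hit
    print(iva_reply("I'd like to pay my bill", session))     # intent + context
```

The point of the sketch is the shape of the logic, not the specific code: the chatbot only pattern-matches, while the IVA tracks context across turns and decides when to hand off to a person.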

You’re better equipped to advance existing contact center communications strategies when you understand IVAs, the full range of capabilities they offer and how they differ from other AI-enabled solutions.

Question: What persona should the intelligent virtual assistant represent?

For an IVA to be effective, you must understand the persona you want the virtual assistant to represent. This persona will inform how you design your virtual assistant to act based on your company’s brand. To know the persona, you need to know how your customers engage with the contact center and the complexity of the skills that the assistants — live and virtual — need to be able to manage. 

Based on these defining characteristics, you can set business rules for the IVA. These rules then create the standard for how to design the IVA. Key questions to answer to uncover persona include:

  • Should the voice be female? Male?
  • Should it have an accent?
  • How many languages should it be able to speak?
  • Will it need to be familiar with jargon from a particular industry?
  • Should it have a casual tone and follow a more informal language model? Or should it be formal and professional?
  • How will customers speak to the IVA?

Answering these questions will guide you in designing an effective IVA that you can scale for your brand.
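
One lightweight way to capture those answers is as a configuration object that design, training, and QA work can all reference. The sketch below is a hypothetical Python example; the field names and values are assumptions for illustration, not a standard schema or any vendor’s API.

```python
# Illustrative persona configuration; field names and values are
# assumptions for this example, not a standard schema or vendor API.

IVA_PERSONA = {
    "voice": {"gender": "female", "accent": "neutral"},
    "languages": ["en-US", "es-MX"],        # languages the IVA must speak
    "domain_vocabulary": "retail_banking",  # industry jargon to train on
    "tone": "formal",                       # "formal" vs. "casual"
    "input_channels": ["voice", "chat"],    # how customers will reach it
}

def unanswered_questions(persona: dict) -> list:
    """Return the persona fields that still need a decision."""
    required = ["voice", "languages", "domain_vocabulary", "tone", "input_channels"]
    return [name for name in required if not persona.get(name)]

print(unanswered_questions(IVA_PERSONA))  # [] means every question has an answer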

Challenge: Lack of collaboration between IT and CX teams

IT teams often work closely with a communications provider to design and implement the IVA. Though they support this process, IT teams typically don’t engage with the customers and might not have a clear picture of their engagement preferences. You can overcome this challenge by increasing collaboration between IT and customer experience (CX) teams.

For example, CX team members can provide insight into the company rules for customer support and how the business manages interaction paths and escalation levels. In banking, this might include the ability for a caller to create a payment plan with an IVA over the phone; however, if the IVA hears a specific balance figure or detects concern in a particular phrase, it knows to connect the caller to a human agent. If the IVA doesn’t have this level of business logic, the company can jeopardize the customer experience.

CX team members are also knowledgeable about how to create personas for customers and how to understand their engagement preferences. They’re also aware of standard industry terms that customers might use when interacting with an IVA that the IT team might not consider. Once IT teams know these terms, they can then create training models for the IVA that include the common terms and phrases.

What the future holds for intelligent virtual assistants

One current limitation of IVAs is that they lack visual engagement. It will be interesting to see IVAs evolve to video channels in the coming years. With video, customer support teams could use IVAs with biometrics to read people’s body language, make inferences about their experience and sentiment, and either automate video support experiences or escalate to an agent.

For example, in healthcare settings, if someone with a severe illness called their doctor’s office and communicated via IVA-enabled video, the IVA could visually pick up on common symptoms the patient demonstrates. This might include lack of focus, inability to maintain eye contact, drowsiness, etc. The IVA can then note these visible symptoms in the patient’s chart to inform the team of nurses and doctors. The potential of this technology is exciting.

Answering essential questions and addressing challenges related to using IVAs early in the investment process will help you optimize your strategies to leverage automated and intelligent solutions that improve customer experiences. As you deepen your IVA strategies, you’ll better understand the technology’s potential, improve customer experiences and see positive impacts on your operations.

Tim Wurth is director of product management at Intrado.




Nvidia wants to fill the virtual and physical worlds with AI avatars

Nvidia has announced a new platform for creating virtual agents named Omniverse Avatar. The platform combines a number of discrete technologies — including speech recognition, synthetic speech, facial tracking, and 3D avatar animation — which Nvidia says can be used to power a range of virtual agents.

In a presentation at the company’s annual GTC conference, Nvidia CEO Jensen Huang showed off a few demos using Omniverse Avatar tech. In one, a cute animated character in a digital kiosk talks a couple through the menu at a fast food restaurant, answering questions like which items are vegetarian. The character uses facial-tracking technology to maintain eye contact with the customers and respond to their facial expressions. “This will be useful for smart retail, drive-throughs, and customer service,” said Huang of the tech.

In one demo, Nvidia’s avatar tech was used to create a cute character that talked a couple through a menu.

In another demo, an animated toy version of Huang answered questions about topics including climate change and protein production, and in a third, someone used a realistic animated avatar of themselves as a stand-in during a conference call. The caller was wearing casual clothes in a busy cafe, but their virtual avatar was dressed smartly and spoke with no background noise coming through. This last example builds on Nvidia’s Project Maxine work, which aims to fix common problems with video conferencing (like low-quality streams and difficulty maintaining eye contact) with the help of machine learning.

(You can see the toy version of Huang in the video below, starting at 28 minutes. Or skip forward to 1 hour, 22 minutes to see the kiosk demo.)

The Omniverse Avatar announcement is part of Nvidia’s inescapable “omniverse” vision — a grandiose bit of branding for a nebulous collection of technologies. Like the “metaverse,” the “omniverse” is basically about shared virtual worlds that allow for remote collaboration. But compared to the vision put forward by Facebook-owner Meta, Nvidia is less concerned with transporting your office meetings into virtual reality and more about replicating industrial environments with virtual counterparts and — in the case of its avatar work — creating avatars that interact with people in the physical world.

As ever with these presentations, Nvidia’s demos looked fairly slick, but it’s not clear how useful this technology will be in the real world. With the kiosk character, for example, it’s not clear whether customers will actually prefer this sort of interactive experience to simply selecting the items they want from a menu. Huang noted in the presentation that the avatar has a two-second response time — slower than a human, and bound to cause frustration if customers are in a rush. Similarly, although the company’s Project Maxine tech looks flashy, we’ve yet to see it make a significant impact in the real world.



Microsoft’s virtual Xbox museum is a very detailed stroll down memory lane

If you haven’t heard by now, the Xbox brand turned 20 this year. With anniversary livestreams, controllers, and even a surprise Halo Infinite multiplayer release, we’re not sure how you could have missed the news, but that’s neither here nor there. The anniversary train hasn’t stopped rolling yet, as Microsoft has launched a new virtual museum that takes us through the history of Xbox.

From 1990s concept to present day

At first blush, a virtual museum celebrating 20 years of Xbox might sound a bit self-indulgent, but it’s well worth visiting for any Xbox fans out there. The browser-based museum starts you right at the beginning of the Xbox’s history, when Microsoft’s DirectX team began developing the Xbox as a competitor to the upcoming PlayStation 2.

From there, we’re taken through many of the significant events in Xbox history, looking at the development and reveal of the first console and the subsequent launches of the other consoles that make up the Xbox family. It isn’t just console releases that the museum covers; big events like the launch of Kinect and Microsoft’s acquisition of Mojang are included as well. We also get a look at some of the stumbles in Xbox history, with the museum covering the Xbox 360’s “Red Ring of Death” problem, too.

Visitors to the museum get to use avatars to run through a digital track that takes them through the history of each console. There’s also a separate museum for Xbox’s biggest franchise, Halo, which shows all of the major happenings in that franchise alongside Xbox history. You might want to set aside some time over the upcoming holiday weekend to explore the museum, as seeing every exhibit and watching every video will take quite a while.

A quick note: we’ve tried visiting the Xbox museum in both Chrome and Edge, and for us, at least, the museum runs much more smoothly in Edge. Perhaps that’s not a coincidence, but, in any case, if you have Edge installed on your machine, you might want to start by using that browser.

The biggest exhibit is you

While the trip down Xbox memory lane is cool, the virtual museum also recaps the Xbox histories of the players visiting. Logging into your Microsoft account will show you statistics on your years with Xbox, dating all the way back to the original Xbox (assuming you actually connected a LAN cable to it and signed into the early iteration of Xbox Live).

For instance, even though I had an original Xbox back in the day, I never connected it to the internet, so as far as Microsoft is concerned, my first Xbox console was the Xbox 360. The first Xbox game Microsoft has a record of me playing is Halo 3, and my first sign-on to Xbox Live was on October 2nd, 2007.

These statistics go pretty deep, showing you the first time you logged in on each Xbox console throughout the years, the first game you played on each of those consoles, and even the first time you played your most-played Xbox game of all time (for me, that date is September 25th, 2010 and the game in question is Halo: Reach).

The virtual Xbox museum is a fascinating trip, and it’s something that all Xbox users should check out, if for no other reason than to see their own history with the consoles.



Inworld AI joins metaverse innovation with AI-driven virtual characters



Inworld AI, a company that aims to develop a platform that enables users to create AI-driven virtual characters which can be used to populate virtual worlds, announced today that it has raised $7 million in seed funding.

In an exclusive interview, Inworld’s cofounder and CEO Ilya Gelfenbeyn explained that “Inworld AI is a platform for building, basically brains for virtual characters” to populate virtual environments, including the metaverse, VR, and AR worlds. “What we provide is a toolset that enables developers to add brains and build these characters for the world, for different types of environments.”

To create truly immersive characters, Inworld AI attempts to mimic human cognitive abilities, leveraging a mixture of AI technologies (natural language understanding and processing, optical character recognition, reinforcement learning, and conversational AI) to develop sophisticated virtual characters that can respond to questions and carry on conversations.

Inworld AI isn’t developing a solution to design visual avatars, but instead aims to create an AI development platform that enables companies that produce digital avatars and virtual characters to add more advanced communication to their visual designs.

The end goal is to offer a platform that visual avatar providers and organizations can use to develop “characters that can interact naturally with wide-ranging and completely open dialog,” Gelfenbeyn said. Speech, though, is just the tip of the iceberg in terms of these AI characters’ communicative capabilities.

As Gelfenbeyn notes, “Inworld characters should not be limited to speech only, but be able to interact with many of the modalities that humans use, such as facial gestures, body language, emotions, as well as physical interactions.”

Enhancing the metaverse experience with AI brains

“We structure our technology stack based on inspiration from the human brain. We have three main components: perception, cognition, and behavior. Perception is focused on input and understanding of the environment and other agents, using senses like audio and visual,”  Gelfenbeyn said.

To enable virtual characters to perceive the environment audibly and visually, the organization uses a complex mixture of speech-to-text, rules engines, natural language understanding, OCR, and event triggers.

The next component is cognition. “Cognition is about the internal states of the character, such as memory, emotion, personality, goals, and background,” he said. Here Inworld AI will use natural language processing, emotion recognition, reinforcement learning, and goal-directed conversational AI to enhance the cognitive abilities of virtual characters.

Finally, “behavior is about the output or interactions of the character, such as speech gestures, body language, and motion.” Technologies like state-of-the-art generative language models, reinforcement learning, and customized voice and emotion synthesis enable virtual characters to replicate human gestures and behaviors.

Together, these three components provide a solid framework for developers to build virtual characters that can respond in detail to natural language, perceive the digital environment, and offer significant interactions for users.
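
Based on that description, the flow can be pictured as three stages chained together: perceive, think, act. The following Python sketch is purely illustrative and assumes stand-in classes and functions; it is not Inworld AI’s actual implementation.

```python
# Hypothetical three-stage character loop inspired by the perception /
# cognition / behavior split described above; every class and function
# here is an illustrative stand-in, not Inworld AI's code.

from dataclasses import dataclass, field

@dataclass
class Percept:
    """What the character perceives: transcribed speech plus scene events."""
    utterance: str
    scene_events: list = field(default_factory=list)

@dataclass
class CharacterState:
    """Internal state: memory, emotion, and a current goal."""
    memory: list = field(default_factory=list)
    emotion: str = "neutral"
    goal: str = "greet the visitor"

def perceive(transcript: str, events: list) -> Percept:
    """Perception: speech-to-text and vision models would run before this step."""
    return Percept(utterance=transcript.strip(), scene_events=events)

def think(percept: Percept, state: CharacterState) -> CharacterState:
    """Cognition: update memory and emotion based on the new percept."""
    state.memory.append(percept.utterance)
    if "angry" in percept.utterance.lower():
        state.emotion = "concerned"
    return state

def act(state: CharacterState) -> dict:
    """Behavior: emit speech, gesture, and emotion cues; a generative language
    model and an animation system would do the real work here."""
    line = f"You said: {state.memory[-1]}" if state.memory else "Hello there!"
    return {"speech": line, "gesture": "nod", "emotion": state.emotion}

state = CharacterState()
print(act(think(perceive("Hi there, lovely island you have.", []), state)))
```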

Investors include Kleiner Perkins, CRV, and Meta. Inworld AI’s launch is well-timed, with publicity for the metaverse at an all-time high following Facebook’s rebrand to Meta, and decision-makers eager to identify what solutions are available to interact with customers in the metaverse.

As Izhar Armony, general partner at CRV, explained, “the team is growing rapidly, so now is an exciting time for people interested in VR, games, and virtual worlds to partner with and join the company, so they can be at the forefront of this rapidly growing space.”

New kid on the block 

Inworld AI is entering the highly competitive space of AI and machine learning development, where it will compete against established providers like OpenAI and Google AI that let you create machine learning models. Yet Inworld AI fills a unique gap in the market: it provides a highly specialized solution for developing conversational AI for AI-driven virtual characters, rather than generic machine learning models.

At the same time, the AI solutions Inworld AI is developing will enable virtual character creation that extends well beyond the complexity of AI-driven avatars from providers like Pandorabots and Soul Machines.

“Many existing companies have solutions that provide limited answers to script triggers and dialog. In fact, our team built one of the largest providers of such services (API.ai, acquired by Google and now known as Google Dialogflow) so we are very familiar with their capabilities,” Gelfenbeyn said.

“Other companies are beginning to experiment with new technologies (such as large language models) but we believe that these parts, while essential, only provide one piece of the stack necessary to really bring characters to life,” he said.

In other words, these solutions have only scratched the surface of human-AI interactions, and Inworld AI’s approach to replicate human cognition is designed to create much more intelligent virtual entities. While Inworld AI’s mission to build AI brains for virtual characters is ambitious, the team’s AI development pedigree speaks for itself.

Inworld AI’s founders include a roster of experts: Gelfenbeyn, who was previously the CEO of API.ai; chief technology officer Michael Ermolenko, who led machine learning development at API.ai and the Dialogflow NLU/AI team at Google; and product director Kylan Gibbs, who previously led product for applied generative language models at DeepMind.

With this experienced team, the organization is in a strong position to set the standard for interactive virtual characters. After all, “Widespread success of the metaverse and other immersive applications depends on how enveloping those experiences can be,” said Ilya Fushman, investment partner at Kleiner Perkins.

“Inworld AI is building the engine that enables businesses to provide that exciting depth of experience and captivate users. With the team’s track record in providing developers with the tools they need to build AI-fueled applications, we’re excited to support the company in building the future of immersive experiences,” Fushman explained.

Virtual characters are key for immersion

With the metaverse boom beginning to pick up steam, Inworld AI also has a unique role to play in giving providers a toolset they can use to build sophisticated virtual characters and craft more compelling digital experiences for users. The level of immersion offered by these experiences will determine whether the metaverse lives or dies.

The types of experiences that developers can use Inworld AI to build are diverse. As Gelfenbeyn explained, “Immersive realities continue to accelerate, with an increasingly diverse and fascinating ecosystem of worlds and use cases.”

“Virtual spaces like Meta’s Horizon Worlds, Roblox, Fortnite, and others that offer unique experiences and enable users to exist in other worlds will also continue to see quick demand from businesses, offering everything from games to story content to new enterprise applications,” Gelfenbeyn said.

Although Gelfenbeyn noted that the technology is simply intended to let providers create a “native population” for digital worlds and deliver realistic experiences, the metaverse is also becoming a new channel that technical decision-makers can use to interact with customers in the future.

While complete, immersive realities with sophisticated virtual characters are a long way off, the Inworld AI team’s knowledge of conversational AI will undoubtedly help other providers move closer toward building vibrant, virtually populated, and interactive digital worlds.




Walk the Great Wall of China in Google’s Latest Virtual Tour

If your pandemic-related precautions still prevent you from traveling but you’d like to take a trip somewhere far away, then how about diving into the latest virtual tour from Google Arts & Culture?

The Street View-style experience features a 360-degree virtual tour of one of the best-preserved sections of the Great Wall, which in its entirety stretches for more than 13,000 miles — about the round-trip distance between Los Angeles and New Zealand.

A section of China’s Great Wall. Google Arts & Culture

The new virtual tour includes 370 high-quality images of the Great Wall, together with 35 stories offering an array of architectural details about the world-famous structure.

“It’s a chance for people to experience parts of the Great Wall that might otherwise be hard to access, learn more about its rich history, and understand how it’s being preserved for future generations,” Google’s Pierre Caessa wrote in a blog post announcing the new content.

The wall was used to defend against various invaders through the ages and took more than 2,000 years to build. The structure is often described as “the largest man-made project in the world.”

But climate conditions and human activities have seen a third of the UNESCO World Heritage site gradually crumble away, though many sections of the wall are now being restored so that it can be enjoyed and appreciated for years to come.

Google Arts & Culture has been steadily adding to its library of virtual tours, which can be enjoyed on mobile and desktop devices. The collection includes The Hidden Worlds of the National Parks and immersive explorations of some of the world’s most remote and historically significant places.

If you’re looking for more content along the same lines, then check out these virtual-tour apps that transport you to special locations around the world, and even to outer space.



Deepbrain boosts AI-powered virtual avatars with $44M raise



Deepbrain AI (formerly Moneybrain), a conversational AI startup based in Seoul, South Korea, has raised $44 million in a series B round led by Korea Development Bank at a post-money valuation of $180 million. The capital will be used to expand the company’s customer base and operations globally, CEO Eric Jang said in a statement, with a particular emphasis on the U.S.

Deepbrain provides a range of AI-powered customer service products, but its focus is on “synthetic humans,” or human-like avatars that respond to natural language questions. With the pandemic having made online meetups a regular occurrence, the concept of “virtual people” is gaining steam. Startups including Soul Machines, Brud, Wave, Samsung-backed STAR Labs, the AI Foundation, and Deepbrain aim to will a sort of “metaverse” into existence by pursuing AI techniques that can mimic the experience of speaking with a human being (for example, a support agent).

Deepbrain

Founded in 2016, Deepbrain offers video and speech synthesis and chatbot solutions to enterprise customers including MBN, Metro News, and LG HelloVision as well as KB Kookmin Bank and education service provider Kyowon. Using a combination of AI technologies, Deepbrain creates what it calls “AI humans,” or avatars that can respond to questions in a person’s voice.

To create an “AI human,” Deepbrain captures video of a human model in a studio and trains a machine learning system. The model is given a script to read, enabling the system to generate an avatar of the model with synchronized, true-to-life lip, mouth, and head movements.

AI-powered avatars

Jang points out that Deepbrain’s technology can improve virtual experiences while minimizing the need for costly video production. For example, the company is working with an education provider to build “AI human” tutors that will give lectures and answer students’ questions.  Separately, Deepbrain says it’s collaborating with a financial organization to create “AI bankers” that can direct customers to the right human bank personnel, potentially reducing employees’ workloads.

“[Our investors] understand the opportunity we have to enhance the customer experience and lead the growing contactless industry brought on by the pandemic,” Jang said in a press release. “This new investment is validation of our technology, strong business opportunity, and customer traction in key customer service-driven industries.”

Above: Deepbrain’s dashboard. (Image credit: Deepbrain)

Some experts have expressed concern that tools like these could be used to create deepfakes, or AI-generated videos that take a person in an existing video and replace them with someone else’s likeness. The fear is that these fakes might be used to do things like sway opinion during an election or implicate a person in a crime. Deepfakes have already been abused to generate pornographic material of actors and defraud a major energy producer.

Deepbrain doesn’t make clear what protections it has in place to prevent abuse.  We’ve reached out to the company for comment.

Beyond Korea Development Bank, Deepbrain counts as investors IDG Capital China, CH & Partners, Donghun Investment, L&S Venture Capital, and Posco Tech Investment. The series B brings the company’s total raised to $52 million to date.




Fortnite: How to Attend Ariana Grande’s Virtual Concert

Ariana Grande is the latest icon making a “grande” entrance into the world of Fortnite. Like the previous Icon Series addition, LeBron James, Ariana will appear with skins, bling, and other accessories purchasable in Fortnite‘s in-game shop. The main draw is that she’ll be headlining the newest in-game concert, titled the “Rift Tour,” running from August 6-8.

Fortnite‘s last big concert came from rapper, singer, and songwriter Travis Scott, in the form of Astronomical. The collaboration with Ariana Grande is the popular battle royale’s second go at an epic virtual concert experience on that level.

The concert is sure to attract a lot of public attention, from the Fortnite faithful to Ariana fans and those who just want to experience the event. Here’s what you need to know so you don’t miss your chance to catch the show.

Download the latest update

Before anything, you’re going to want to download Fortnite‘s v17.30 update. It’s important to install the update early — well before the actual concert — so you’re not stuck downloading an update while the show is happening. Without it, you won’t be able to attend.

RSVP for your concert(s) of choice

While RSVPing won’t guarantee you a spot at the concert, it will act as a reminder of when you should join the playlist. It’s not a necessary step, but it is a recommended one. You’ll be able to RSVP within Fortnite: when you log in, a menu will pop up that lets you pick a date. There’s also an RSVP tab in the game, which you can click into to select a time.

You can choose from a multitude of dates and times to attend. You may also go to more than one show, if you wish. Here are the current showtimes.

  • Friday, August 6, at 6 p.m. ET
  • Saturday, August 7, at 2 p.m. ET
  • Sunday, August 8, at midnight ET
  • Sunday, August 8, at 10 a.m. ET
  • Sunday, August 8, at 6 p.m. ET

Log in to Fortnite an hour or more before your event time

Epic recommends loading up Fortnite 60 minutes before your selected concert set begins, as the Rift Tour playlist will go live 30 minutes ahead of each show. Of course, with so many players looking to attend thanks to Ariana Grande’s reach, you may want to log on even earlier in case of server login errors.

Make sure you make time for multiple concerts if needed

It’s possible you may be locked out of your show of choice, so be prepared to follow these steps for another date instead. Thankfully, Epic is bringing multiple showings to Fortnite, with one each on August 6 and 7 and three on August 8.



The next Grid game uses the same virtual set tech as ‘The Mandalorian’

We now have a first look at the next Grid game. Grid Legends features a story mode that mashes together live-action performances and in-game action. Senior gameplay designer Becky Crossdale said Codemasters harnessed the same extended reality tech that was used to create the world of The Mandalorian.

In the story mode, you’ll be “front and center in a fly-on-the-wall documentary that captures every moment on and off the track” during the Grid World Series. The cast includes Ncuti Gatwa (Netflix’s Sex Education).

You’ll be able to race in and upgrade more than 100 vehicles, including touring cars, big rigs, single-seaters, and stadium trucks. With the race creator, you can set up a showdown between a variety of mixed-class rides. There will also be more than 130 tracks to race on, including real-life locations like Brands Hatch and Indianapolis and street routes in the likes of San Francisco, Paris, London and Moscow.

Grid Legends is coming to PlayStation 4, PS5, Xbox One, Xbox Series X/S and PC in 2022. Codemasters plans to reveal more details in the coming months.




Intuit expanded its user base with AI assistants and virtual human experts



When it comes to filing taxes, some people prefer to handle it all by themselves. Other people prefer to let the experts take care of everything. For the people somewhere in the middle, Intuit has a service called TurboTax Live, which utilizes AI to match customers with experts who will help guide them through the process.

“There’s really room for the idea of ‘do it with me’ and … you need some help and you want some guidance,” Marianna Tessel, Intuit chief technology officer, said during a session at VentureBeat’s Transform 2021 summit.

There is more to the service beyond matching customers to experts based on scheduling. Intuit also factors in hundreds of attributes to find the right expert to address each customer’s unique needs. This application of AI allows the service to match customers with the best expert on hand within minutes via chat or video call.

The result? Intuit’s user base has increased by 70% over the past year. Customer service wait times decreased by 15%. Additionally, Intuit anticipates its user base increasing by another 90% this year.

Growth through AI

In response to a question from VentureBeat CEO Matt Marshall on how much of their success can be attributed to AI, Tessel acknowledged that while there was “no question” that there was a boost from people working remotely due to the pandemic, Intuit believes that most of the growth has happened because of the high quality and intrinsic convenience of their service, bolstered by AI.

Intuit invests in AI across three distinct fields:

  1. Knowledge engineering, which helps codify tax compliance rules into code so computers can help customers understand what information is needed and what the next step is.
  2. Machine learning, used extensively to help matchmake customers with experts and to help personalize products based on customer data.
  3. Natural language processing, so the AI can listen to customers’ spoken words and read written text, such as the information on a tax document.

Tessel says that using all these fields in combination is how Intuit’s AI can read a tax document, identify what type of document it is, and figure out what to do with the information on it.
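
As a rough illustration of how those three pieces could fit together for a single document, here is a hypothetical Python sketch; the keyword-based classifier, the field extractor, and the compliance rule are placeholders, not Intuit’s implementation.

```python
# Hypothetical pipeline combining the three fields described above; the
# classifier, extractor, and rule are placeholders, not Intuit's code.

def classify_document(text: str) -> str:
    """Machine learning: decide what kind of tax document this is.
    A real system would use a trained classifier instead of keywords."""
    return "W-2" if "wages" in text.lower() else "unknown"

def extract_fields(text: str) -> dict:
    """NLP: pull structured values out of the document text."""
    fields = {}
    for line in text.splitlines():
        if "wages" in line.lower():
            fields["wages"] = float(line.split("$")[-1].replace(",", ""))
    return fields

def next_step(doc_type: str, fields: dict) -> str:
    """Knowledge engineering: a codified compliance rule decides what to ask next."""
    if doc_type == "W-2" and "wages" in fields:
        return f"Recorded wages of ${fields['wages']:,.2f}. Next: enter federal tax withheld."
    return "We couldn't read this document; please enter the values manually."

sample = "Form W-2\nBox 1 Wages, tips: $52,300.00"
print(next_step(classify_document(sample), extract_fields(sample)))
```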

When asked about lessons learned, Tessel emphasized the positive impact of engineering hygiene, asking the right questions when the numbers don’t look great and conducting root cause analyses. She also emphasized that while the migration to the cloud was difficult, not having to worry about managing infrastructure was a big boost for the company.

For Intuit, AI “is a machine and human collaboration, a lot more than we expected,” Tessel said.

