
People spent much less time watching gaming streams this spring, report says

The number of hours streamed and watched across Twitch, YouTube Gaming, and Facebook Gaming has dropped significantly over the last year, according to the latest Streamlabs and Stream Hatchet report on the livestreaming landscape. Between April and June, streamers on the three platforms were live for 273 million hours. That’s down 19.4 percent from Q2 2021 and 12 percent from the previous quarter.

Viewers tuned in to streams for 7.36 billion hours across the three platforms last quarter. That’s a drop of 18.1 percent year over year (viewership was at 8.99 billion hours in Q2 2021) and 8.4 percent from the previous quarter. The slowdown for all three platforms could be a case of people spending more time outside than they did last year for pandemic-related reasons.

Twitch is still by far the biggest player among the three platforms, with 76.7 percent of market share in terms of hours watched (5.64 billion) and 92.7 percent of hours streamed (204.2 million). Those figures dropped by 13.4 percent and 16 percent from Q2 2021. The number of unique channels streaming on the platform dropped by nearly 2 million to 9.6 million as well.

However, Twitch’s Just Chatting category continues to go from strength to strength. Hours watched there actually grew by 2.2 percent from the previous quarter, giving the category its highest ever viewership. The most-watched categories after that were (465 million hours) and (464 million).

YouTube Gaming viewership actually remained steady from the previous quarter, though it dropped 13.1 percent from Q2 2021 to 1.13 billion hours. The total hours streamed dropped by 9.6 percent year over year to 8.05 million.

Facebook Gaming suffered a bigger setback, per the report, despite Meta’s efforts to court creators. The number of hours watched fell by a whopping 51 percent from a year ago to 580 million. There was an even bigger drop in terms of hours streamed, from 20.8 million in Q2 2021 to 7.9 million last quarter — a decline of 62 percent.

Perhaps we’ll soon start seeing some of those numbers creep up again, though. With a recession looming, folks may spend more time indoors again, tuning back into streamers they enjoyed watching during the first 18 months or so after COVID-19 took hold.




Twitch’s latest test lets you preview channels without watching ads

Twitch has begun testing a new feature that could introduce you to great streamers you haven’t seen before. Channel Switcher shows random channels as a carousel at the bottom of the screen. When you click on any of them, you’ll be able to watch a one-minute preview of the streamer’s content, enough to give you an idea of what they offer. The previews have no ads either, so you can channel surf undisturbed until you find something to watch. As Twitch explains, the feature will make it easier to figure out if you like a specific channel before committing.

A Twitch spokesperson told The Verge that “only a small percentage of [randomly selected] users who are logged in” will get the chance to test out the feature. The company plans to end the test in July and then analyze its results. While it’s unclear if Channel Switcher will get a wide release at this point, the spokesperson told the publication that Twitch intends to roll out future iterations and is thinking of offering it as an opt-in discovery solution. 

Alongside Channel Switcher, Twitch also launched Guest Star, which allows up to five guests to join a host in a stream. It works similarly to Clubhouse in that streamers can include other streamers and viewers in their broadcast, but it of course supports video and not just audio conversations.



Adobe CIO talks about watching users to enhance customer experience

Technology should be harnessed to enable users to do things for themselves and to improve user experience, Cynthia Stoddard, chief information officer of Adobe, said during VentureBeat’s Transform 2021 virtual conference.

“I love technology, because I love the impact that it has on businesses and [on how people work],” Stoddard told Noelle Silver, founder of the AI Leadership Institute. “[The] outcomes and the contributions that we can make by some of the most simple changes and applications of technology are just amazing.”

The past year has illustrated the importance of giving people the tools that they need to be productive, so that they can do their jobs while simultaneously managing their family lives. Stoddard described the idea of giving people the tools they need as self-service in business, similar to how self-service in the cloud lets users take care of certain tasks without waiting for IT.

The data-platform-as-a-service is a good example of giving people what they need, Stoddard said. Different people on Stoddard’s team had varying levels of comfort when it came to accessing data. The data scientists wanted to do everything themselves, while others had no interest in dealing with formulas. The pre-built services let business users either download the data and build their own formulas and data views, or simply load a dashboard and nothing more.

“We’ve created the data-platform-as-a-service that really caters to all the different personas,” Stoddard said. “You can have a dashboard, or you can download [the data] and create your own data mart.”

Customer experience is key

A valuable way to improve customer experience is to get close to the customer, Stoddard said. At Adobe, the team puts themselves in the shoes of those customers to understand the pain points of using Adobe products and services. The kind of help the user would need, or the kind of tool they would prefer, would vary depending on whether the user was a power user or a more mainstream one. A team looks at the experience — not the user interface — from end-to-end to understand how people of different skill levels are engaging with Adobe products.

“Can you help more with customer support? Can we inject some new features into the total product? Can we inject product help?” Stoddard asked. She listed examples of many different results that come out of analyzing customer journeys, which the company can incorporate into enhancing the user experience. In addition, she said that considering diversity in the customer’s workforce also helps Adobe to make sure the help they provide is specific to the group that needs it, especially in cases when it might be different from the mainstream design trend.

“We’ve invested a lot in experience,” Stoddard said.

Learning as the driving force

“Innovation can come from anywhere,” Stoddard said, adding that’s what her team and organization believe in. Within the organization, teams have embraced a range of open source tools to work on automation. Members work together to see how they can take artificial intelligence and machine learning and apply them to the company’s framework products. The team looked at particular areas where the product could self-heal, using automation to detect and fix issues. “It used to take 20 minutes-plus. They take seconds or minutes now,” Stoddard said.

Going forward, Stoddard sees her CIO role expanding and continuing to add value to the team. Her advice for others in the tech industry is to keep learning from their peers. “The connections really help you get connected to the industry in many different ways,” she said. She believes it’s important to stay connected both within the organization and to other industries, since that will bring new ideas to people and teach them how to do things differently.

Meanwhile, Stoddard prides herself on being a mentor to her team. “I love seeing them grow, get broader, and get deeper,” she said. “The mentoring aspect, and helping people be successful, makes me smile.”



This AI system learned to understand videos by watching YouTube

Humans understand events in the world contextually, performing what’s called multimodal reasoning across time to make inferences about the past, present, and future. Given text and an image that seem innocuous when considered apart — e.g., “Look how many people love you” and a picture of a barren desert — people recognize that these elements take on potentially hurtful connotations when they’re paired or juxtaposed, for example.

Even the best AI systems struggle in this area. But there’s been progress, most recently from a team at the Allen Institute for Artificial Intelligence and the University of Washington’s Paul G. Allen School of Computer Science & Engineering. In a preprint paper published this month, the researchers detail Multimodal Neural Script Knowledge Models (Merlot), a system that learns to match images in videos with words and even follow events globally over time by watching millions of YouTube videos with transcribed speech. It does all this in an unsupervised manner, meaning that the videos haven’t been labeled or categorized — forcing the system to learn from the videos’ inherent structures.

Learning from videos

Our capacity for commonsense reasoning is shaped by how we experience causes and effects. Teaching machines this type of “script knowledge” is a significant challenge, in part because of the amount of data it requires. For example, even a single photo of people dining at a restaurant can imply a wealth of information, like the fact that the people had to meet up, agree where to go, and enter the restaurant before sitting down.

Merlot attempts to internalize these concepts by watching YouTube videos. Lots of YouTube videos. Drawing on a dataset of 6 million videos, the researchers trained the model to match individual frames with a contextualized representation of the video transcripts, divided into segments. The dataset contained instructional videos, lifestyle vlogs of everyday events, and YouTube’s auto-suggested videos for popular topics like “science” and “home improvement,” each selected explicitly to encourage the model to learn about a broad range of objects, actions, and scenes.

The goal was to teach Merlot to contextualize the frame-level representations over time and over spoken words, so that it could reorder scrambled video frames and make sense of “noisy” transcripts — including those with erroneously lowercase text, missing punctuation, and filler words like “umm,” “hmm,” and “yeah.” The researchers largely accomplished this. They report that in a series of qualitative and quantitative tests, Merlot had a strong “out-of-the-box” understanding of everyday events and situations, enabling it to take a scrambled sequence of events from a video and order the frames to match the captions in a coherent narrative, like people riding a carousel.
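
To make that training setup a bit more concrete, here is a minimal, illustrative sketch (in PyTorch) of the two kinds of self-supervised objectives described above: matching frames to their transcript segments with a contrastive loss, and predicting the original order of shuffled frames. This is not MERLOT's actual code; the toy encoders, module names, and dimensions are all assumptions made purely for the example.

```python
# Illustrative sketch only -- not MERLOT's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyFrameTranscriptModel(nn.Module):
    def __init__(self, dim=256, vocab=10000, max_frames=16):
        super().__init__()
        # Stand-in for a real image backbone: flattens a tiny 32x32 frame and projects it.
        self.frame_encoder = nn.Sequential(
            nn.Flatten(), nn.Linear(3 * 32 * 32, dim), nn.ReLU(), nn.Linear(dim, dim))
        # Stand-in for a transcript encoder: mean-pools token embeddings per segment.
        self.text_encoder = nn.EmbeddingBag(vocab, dim)
        # Predicts which temporal slot a (shuffled) frame originally occupied.
        self.order_head = nn.Linear(dim, max_frames)

    def forward(self, frames, token_ids):
        f = F.normalize(self.frame_encoder(frames), dim=-1)
        t = F.normalize(self.text_encoder(token_ids), dim=-1)
        return f, t

def matching_loss(f, t, temperature=0.07):
    # Frame i should line up with transcript segment i more than with any other segment.
    logits = (f @ t.T) / temperature
    targets = torch.arange(f.size(0))
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.T, targets)) / 2

def ordering_loss(model, f, original_positions):
    # Given shuffled frame embeddings, recover each frame's original position.
    return F.cross_entropy(model.order_head(f), original_positions)

model = ToyFrameTranscriptModel()
perm = torch.randperm(8)                          # shuffle 8 frame/segment pairs from one video
frames = torch.randn(8, 3, 32, 32)[perm]          # toy frames (random tensors stand in for pixels)
tokens = torch.randint(0, 10000, (8, 12))[perm]   # toy transcript segments, shuffled the same way
f, t = model(frames, tokens)
loss = matching_loss(f, t) + ordering_loss(model, f, perm)
loss.backward()
print(float(loss))
```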

Future work

Merlot is only the latest work on video understanding in the AI research community. In 2019, researchers at Georgia Institute of Technology and the University of Alberta created a system that could automatically generate commentary for “let’s play” videos of video games. More recently, researchers at Microsoft published a preprint paper describing a system that could determine whether statements about video clips were true, by learning from visual and textual clues. And Facebook has trained a computer vision system that can automatically learn audio, textual, and visual representations from publicly available Facebook videos.

Above: Merlot can understand the sequence of events in videos, as demonstrated here.

The Allen Institute and University of Washington researchers note that, like previous work, Merlot has limitations, some owing to the data selected to train the model. For example, Merlot could exhibit undesirable biases because it was trained only on English data and largely on local news segments, which can spend a lot of time covering crime stories in a sensationalized way. The researchers concede it’s “very likely” that training models like Merlot mostly on news content could cause them to learn racist patterns, and, given that the most popular YouTubers in most countries are men, sexist patterns as well. Studies have demonstrated a correlation between watching local news and holding more explicit, racialized beliefs about crime.

For these reasons, the team advises against deploying Merlot into a production environment. But they say that Merlot is still a promising step for future work in multimodal understanding. “We hope that Merlot can inspire future work for learning vision+language representations in a more human-like fashion compared to learning from literal captions and their corresponding images,” the coauthors wrote. “The model achieves strong performance on tasks requiring event-level reasoning over videos and static images.”



This $110 Android tablet is perfect for watching Netflix, surfing the web, and more

TLDR: The Vankyo MatrixPad Z10 has 1080p resolution, a lightning-fast processor, and everything a tablet needs to compete with the iPad, at roughly a third of the Apple price.

While a 2020 spent indoors probably wasn’t your idea of a good time, some people took the COVID-inspired lemons and made lemonade. Among the beneficiaries were tablet makers, who saw sales of those portable workhorses soar almost 14 percent last year as everybody settled into working from home.

And of course, that was a huge boon to Apple, the top tablet manufacturer, which shipped over 52 million iPads in 2020. If you want an iPad, though, it’s going to cost you. Never one to lower prices, even during a global pandemic, Apple still charges around $329 for a new 10-inch iPad, not much less than it has over the past decade.

But many Apple competitors have stepped up to the challenge, crafting tablets that meet and, in some cases, exceed their Cupertino counterparts, often for a much lower price. 

Vankyo is one of those brands in the heat of battle — and their Vankyo MatrixPad Z10 isn’t just ready to trade functionality blows with the iPad. Its price, currently just $109.99 from TNW Deals, beats it like a rented mule.

The MatrixPad Z10 offers all the major selling points most tablet shoppers demand. The 10.1-inch IPS display boasts 1080p resolution for true HD-quality images, whether you’re watching video, viewing photos, or browsing the web.

Under the hood, the Android-based Z10 is rocking a sporty MediaTek MT8163 quad-core processor with 3GB RAM for some real pop. Armed with that kind of computing power, it’s more than capable of handling any task that an iPad could take on, from running apps simultaneously to fast downloads for quick app launches, smooth gameplay, buffer-free song and video playback, and more.

In addition to winning features like a 13MP camera, WiFi and Bluetooth connectivity, dual stereo speakers, and a spacious 32GB of on-board memory, the Z10 also brings a few winning points to the table the iPad doesn’t have.

Like its eye health and reading mode, which samples the ambient light and turns the display monochromatic with a slight yellowish tint, easing eyestrain with a paper-like reading experience. There’s also hassle-free screen sharing that instantly mirrors video or audio from your tablet to your living room TV or another big screen for full-family viewing.

And yeah… there’s that price too. On top of all those iPad-worthy comparisons, the Vankyo MatrixPad Z10 smokes Apple on cost: just $109.99 with this current offer, a savings of more than 20 percent off its retail price.

Prices are subject to change.





Nvidia’s AI recreates Pac-Man from scratch just by watching it being played

Nvidia is best known for its graphics cards, but the company conducts some serious research into artificial intelligence, too. For its latest project, Nvidia researchers taught an AI system to recreate the game of Pac-Man simply by watching it being played.

There’s no coding involved, no pre-rendered images for the software to draw on. The AI model is simply fed visual data of the game in action along with the accompanying controller inputs and then recreates it frame by frame from this information. The resulting game is playable by humans, and Nvidia says it will be releasing it online in the near future.

The AI version is by no means a perfect facsimile, though. The imagery is blurry and it doesn’t seem like the AI managed to capture the exact behavior of the game’s ghosts, each of which is programmed with a specific personality that dictates its movement. But the basic dynamics of Pac-Man are all there: eat pellets, avoid ghosts, and try not to die.

“It learns all of these things just by watching,” Nvidia’s Rev Lebaredian, vice president of simulation technology, told journalists in a briefing. “[It’s] similar to how a human programmer can watch many episodes of Pac-Man on YouTube and infer what the rules of the games are and reconstruct them.”

Lebaredian said the work had been done in collaboration with Pac-Man’s creator, Bandai Namco, which is celebrating the 40th anniversary of the arcade classic today.

The AI-generated Pac-Man is a little blurry, but all the basics are there.
Image: Nvidia

Nvidia says work like this shows how artificial intelligence will be used for game design in the future. Developers can input their work into the AI and use it to create variations or maybe design new levels. “You could use this to mash different games together,” Sanja Fidler, director of Nvidia’s Toronto research lab, told journalists, “giving additional power to games developers by [letting them] blend together different games.”

Creating AI that can learn the rules of a virtual world just by watching it in action also has implications for tasks like programming robots. “Eventually we’d like it to learn the rules of the real world,” says Lebaredian. The AI might watch videos of robot trolleys navigating a warehouse, for example, and use that information to design navigation software of its own.

The program that recreated Pac-Man is called GameGAN. GAN stands for generative adversarial network, a common architecture used in machine learning. The basic principle of a GAN is that it works in two halves: a generator that tries to produce data resembling its training input, and a discriminator that compares the generated data to the real source. When the discriminator can tell the generated data apart from the real thing, that output is rejected, and the generator tweaks its work and resubmits it.
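
To make the two-halves idea concrete, here is a minimal, hedged sketch of a GAN training loop in PyTorch, using toy one-dimensional data rather than game frames. It is not GameGAN's code; the network sizes, learning rate, and data here are illustrative assumptions only.

```python
# Toy GAN training loop -- illustrative only, not GameGAN.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 32

generator = nn.Sequential(          # first half: produces candidate data from random noise
    nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, data_dim))
discriminator = nn.Sequential(      # second half: scores how "real" a sample looks
    nn.Linear(data_dim, 64), nn.ReLU(), nn.Linear(64, 1))

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(200):
    real = torch.randn(64, data_dim) * 0.5 + 2.0   # stand-in for real training data
    fake = generator(torch.randn(64, latent_dim))

    # Train the discriminator to tell real samples from generated ones.
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Train the generator to produce samples the discriminator accepts as real.
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

GameGAN applies this same adversarial setup to full game frames conditioned on controller input, which is a much larger but conceptually similar problem.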

AI systems like this could be used to train warehouse robots like the one above, which is powered by Nvidia’s hardware and software.
Image: Nvidia

Using AI to generate virtual worlds like video games has been done before. But Nvidia’s researchers introduced several new aspects, including a “memory module” that allows the system to store an internal map of the game world. This leads to greater consistency in the game world, a key characteristic when recreating the mazes of Pac-Man. The approach also allows the static elements of the game world (like the maze) to be separated from the dynamic ones (like the ghosts), which suits the company’s goal of using AI to generate new levels.
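
As a rough illustration of how an external memory and a static/dynamic split might fit into an action-conditioned world model like the one described above, here is a simplified PyTorch sketch. It is not Nvidia's GameGAN architecture; every module, name, and shape below is an assumption, chosen only to show the idea of reading from a persistent memory and compositing separately rendered static and dynamic layers into the next frame.

```python
# Rough, illustrative sketch -- NOT Nvidia's GameGAN code.
import torch
import torch.nn as nn

class ToyWorldModel(nn.Module):
    def __init__(self, n_actions=5, hidden=128, mem_slots=64, frame_px=32):
        super().__init__()
        self.encode = nn.Sequential(nn.Flatten(), nn.Linear(3 * frame_px * frame_px, hidden))
        self.dynamics = nn.GRUCell(hidden + n_actions, hidden)       # advances the game state
        # Persistent "map" of the world (simplified here to a learned table, not written per step).
        self.memory = nn.Parameter(torch.randn(mem_slots, hidden) * 0.01)
        self.read_query = nn.Linear(hidden, hidden)
        # Two decoders: one for static layout (the maze), one for dynamic sprites (Pac-Man, ghosts).
        self.static_head = nn.Linear(hidden, 3 * frame_px * frame_px)
        self.dynamic_head = nn.Linear(hidden, 3 * frame_px * frame_px)
        self.mask_head = nn.Linear(hidden, frame_px * frame_px)      # where dynamic content overwrites static
        self.frame_px = frame_px

    def forward(self, frame, action_onehot, state):
        # 1. Update the hidden game state from the last frame and the controller input.
        x = torch.cat([self.encode(frame), action_onehot], dim=-1)
        state = self.dynamics(x, state)
        # 2. Attend over memory so previously seen layout stays consistent over time.
        attn = torch.softmax(self.read_query(state) @ self.memory.T, dim=-1)
        context = state + attn @ self.memory
        # 3. Render static and dynamic layers separately, then composite the next frame.
        px = self.frame_px
        static = self.static_head(context).view(-1, 3, px, px)
        dynamic = self.dynamic_head(context).view(-1, 3, px, px)
        mask = torch.sigmoid(self.mask_head(context)).view(-1, 1, px, px)
        next_frame = mask * dynamic + (1 - mask) * static
        return next_frame, state

model = ToyWorldModel()
frame = torch.rand(1, 3, 32, 32)                   # current game frame (toy data)
action = torch.zeros(1, 5); action[0, 2] = 1.0     # hypothetical "move left" input
state = torch.zeros(1, 128)
next_frame, state = model(frame, action, state)
print(next_frame.shape)
```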

David Ha, an AI researcher at Google who’s worked on similar tasks, told The Verge that the research was “very interesting.” Earlier teams have tried to recreate game worlds using GANs, said Ha, “but from what I know, [this] is the first to demonstrate good results.”

“All in all, a very exciting paper, and I look forward to see more developments using this approach,” said Ha.

Some elements of the process definitely need tweaking, though, and demonstrate the particular fragility of artificial intelligence when learning new tasks. Fidler told journalists that to recreate Pac-Man, GameGAN had to be trained on some 50,000 episodes. Getting that gameplay data from humans wasn’t feasible, so the team used an AI agent to generate the data. Unfortunately, the AI agent was so good at the game that it hardly ever died.

“That made it hard for the AI trying to recreate the game to learn the concept of dying,” says Fidler. Instead, in early versions of the AI-generated Pac-Man, GameGAN tweaked the game so that ghosts never actually reached the title character but trailed directly behind it like baby ducks following a parent. “It’s a funny effect of the way we trained it,” says Fidler.
