
Blizzard may have canceled a ‘World of Warcraft’ mobile spinoff (updated)

Arclight Rumble wasn’t going to be the only upcoming Warcraft mobile game, according to a report. Bloomberg sources claim Blizzard and NetEase have canceled a World of Warcraft spinoff mobile title that had been in development for three years. Nicknamed Neptune, it would have been a massively multiplayer game set in a different era of the fantasy universe. It wouldn’t simply have been a WoW phone port, to put it another way.

While the exact reasons for the cancelation weren’t mentioned, one of the insiders said Blizzard and NetEase “disagreed over terms” and ultimately decided to scrap the unannounced game. NetEase supposedly had over 100 developers attached to the project. The two were rumored to have previously canceled another Warcraft mobile release, a Pokémon Go-style augmented reality game, after four years of effort.

Spokespeople from both companies declined to comment. If the rumor is accurate, it suggests Blizzard is struggling to adapt to the rise of mobile gaming. While Diablo Immortal appears to be a success and is joining the well-established Hearthstone, the developers will still have sunk massive resources into other games that never reached players.

There are strong incentives to take these risks, however. Mobile games can be highly lucrative, particularly in countries like China — Genshin Impact has pulled in $3 billion since release, according to Sensor Tower estimates. A hit could easily boost Blizzard’s bottom line, not to mention spur demand for its existing computer- and console-bound games.

Update 8/5 9:49AM ET: Spokesperson Andrew Reynolds told Engadget that Blizzard still has an “extremely successful relationship” with NetEase, and said it was “entirely untrue” that there were any financial disagreements between the two companies. There was no mention of the spinoff or its current status.

All products recommended by Engadget are selected by our editorial team, independent of our parent company. Some of our stories include affiliate links. If you buy something through one of these links, we may earn an affiliate commission.

Repost: Original Source and Author Link


Ubisoft will reveal updated ‘Skull and Bones’ gameplay this week

After years of delays, Ubisoft is finally ready to offer a fresh look at Skull and Bones. The publisher said Tuesday that it plans to host a showcase dedicated to the open-world multiplayer pirate game on July 7th at 2PM ET, and will livestream the event.

The company is widely expected to announce a new release date then. If you haven’t kept track of Skull and Bones’ troubled development, we don’t blame you. Ubisoft first announced the title with an original release window planned for 2018, then pushed the game back repeatedly, to 2021 and finally to its current fiscal year.

In 2020, Elisabeth Pellen, the game’s creative director, attributed the delays to a change of vision. The original premise of Skull and Bones was simple: adapt the sailing mechanics from 2013’s Assassin’s Creed IV: Black Flag and strip out the narrative threads and stealth mechanics that made that game unapproachable for some.

“The answer is that we simply needed more time. We dreamt something bigger for Skull and Bones, and these ambitions naturally came with bigger challenges,” Pellen said at the time. “As Skull and Bones evolved from its original idea to what it is now, it was also necessary to have some fresh eyes join the team.” Ubisoft also announced today it plans to hold a separate event on September 10th that will feature multiple games and projects from the company’s teams.




Baidu debuts updated AI framework and R&D initiative



At Wave Summit, Baidu’s biannual deep learning conference, the company announced version 2.1 of PaddlePaddle, its framework for AI and machine learning model development. Among the highlights are a large-scale graph query engine; four pretrained models; and PaddleFlow, a cloud-based suite of machine learning developer tools that includes APIs and a software development kit (SDK). Baidu also unveiled what it’s calling the Age of Discovery, a 1.5 billion RMB (~$235 million) grant program that will invest in AI education, research, and entrepreneurship over the next three years.

At Wave Summit, Baidu CTO Haifeng Wang outlined the top AI trends from the company’s perspective. Deep learning with knowledge graphs has significantly improved the performance and interpretability of models, he said, while multimodal semantic understanding across language, speech, and vision has become achievable through graphs and language semantics. Moreover, Wang noted, deep learning platforms are coordinating closely with hardware and software to meet various development needs, including computing power, power consumption, and latency.

To this end, PaddlePaddle 2.1 introduces optimized automatic mixed precision, which can speed up the training of models, including Google’s BERT, by up to 3 times. New APIs reduce memory usage, further improve training speeds, and add support for data preprocessing, GPU-based computation, mixed-precision training, and model sharing.
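Mixed-precision training keeps a float32 master copy of the weights and typically applies loss scaling so that small gradients survive the round trip through float16. The following framework-free sketch (our own illustration using Python’s `struct` support for IEEE 754 half precision, not PaddlePaddle’s actual implementation) shows why the scaling matters:

```python
import struct

def to_fp16(x: float) -> float:
    """Round a Python float to the nearest IEEE 754 half-precision value."""
    return struct.unpack("e", struct.pack("e", x))[0]

# A tiny gradient underflows to zero when stored directly in fp16 ...
grad = 1e-8
print(to_fp16(grad))  # 0.0 -- the update is lost

# ... but survives if the loss (and hence the gradient) is scaled first.
loss_scale = 1024.0
scaled = to_fp16(grad * loss_scale)
print(scaled)  # small but nonzero

# Unscale in float32 before the optimizer step to recover the true gradient.
recovered = scaled / loss_scale
print(abs(recovered - grad) < 1e-9)  # True
```

Frameworks automate this bookkeeping: the loss is multiplied by the scale before backpropagation, and gradients are divided by it in float32 before the weight update.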

Also arriving with PaddlePaddle 2.1 are four new language models built on Baidu’s ERNIE. ERNIE, which Baidu developed and open-sourced in 2019, is pretrained on natural language tasks through multitask learning, in which multiple learning tasks are solved at the same time by exploiting commonalities and differences between them. Beyond this, PaddlePaddle 2.1 brings an optimized pruning compression technology called PaddleSlim, as well as LiteKit, a toolkit for mobile developers that aims to reduce the development costs of edge AI.

PaddlePaddle Enterprise and Age of Discovery

PaddlePaddle Enterprise, Baidu’s business-oriented set of machine learning tools, gained a new service this month in PaddleFlow, a cloud platform that provides capabilities for developers to build AI systems, including resource management and scheduling, task execution, and service deployment via developer APIs, a command-line client, and an SDK.

In related news, Baidu says that as a part of its new Age of Discovery initiative, the company will invest RMB 500 million ($78 million) in capital and resources to support 500 academic institutions and train 5,000 AI tutors and 500,000 students with AI expertise by 2024. Baidu also plans to pour RMB 1 billion ($156 million) into 100,000 businesses for “intelligent transformation” and AI talent training.

Laments over the AI talent shortage have become a familiar enterprise refrain. O’Reilly’s 2021 AI Adoption in the Enterprise paper found that a lack of skilled people and difficulty hiring topped the list of challenges in AI, with 19% of respondents citing it as a “significant” barrier. In 2018, Element AI estimated that of the 22,000 Ph.D.-educated researchers working on AI development and research globally, only 25% were “well-versed enough in the technology to work with teams to take it from research to application.”

“PaddlePaddle researchers and developers will collaborate with the open source community to build a deep learning open source ecosystem and break the boundaries of AI technology,” Baidu said in a press release. “With the permeation of AI across various industries, it is critical for platforms to keep lowering their threshold to accelerate intelligent transformation.”

VentureBeat

VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative technology and transact.

Our site delivers essential information on data technologies and strategies to guide you as you lead your organizations. We invite you to become a member of our community, to access:

  • up-to-date information on the subjects of interest to you
  • our newsletters
  • gated thought-leader content and discounted access to our prized events, such as Transform 2021: Learn More
  • networking features, and more

Become a member



Pokemon GO Eevee evolution names and tricks: Updated for Sylveon!

Niantic’s latest update to Pokemon GO shows how Sylveon will finally appear as an evolution of Eevee. This is the latest of 8 possible evolutions of Eevee: Flareon, Jolteon, Vaporeon, Umbreon, Espeon, Glaceon, and Leafeon – and now Sylveon. One path to evolution requires that you give the Pokemon a special name – the other requires that you keep an Eevee as a buddy!

Eevee evolution name trick

There’s a total of 8 names that users can employ to evolve Eevee into the Pokemon of their choice. *At the time this article is published, the Sylveon name is NOT YET ACTIVE – but it will be soon. The Sylveon name trick will only work AFTER Sylveon is activated in the game in late May.

• Pyro evolves into Flareon
• Sparky evolves into Jolteon
• Rainer evolves into Vaporeon
• Tamao evolves into Umbreon
• Sakura evolves into Espeon
• Linnea evolves into Leafeon
• Rea evolves into Glaceon
• Kira evolves into Sylveon*

The process is simple: take an Eevee, change its name to one of the names on the left of the list above, and evolve it. BANG, it’ll become the Pokemon indicated on the right. Make sure the name starts with a capital letter, just to be sure!

Note also that you will only be able to use the name trick ONCE for each of the 8 evolutions of Eevee. Name an Eevee Pyro, evolve that Pokemon into Flareon, and no further Eevee named Pyro will have a guaranteed evolution into Flareon.

It’s still true that if you evolve an Eevee without any special trick, you’ll get one of three Pokemon at random: Flareon, Jolteon, or Vaporeon. The rest require either a dedicated process or the name trick.
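For readers who like to see the rules as data, the name trick boils down to a case-sensitive lookup table. This small sketch (`NAME_TRICK` and `evolve_by_name` are our own hypothetical names, not anything from the game) captures the behavior described above:

```python
# Nickname -> guaranteed evolution (each nickname works only once per account).
NAME_TRICK = {
    "Pyro": "Flareon",
    "Sparky": "Jolteon",
    "Rainer": "Vaporeon",
    "Tamao": "Umbreon",
    "Sakura": "Espeon",
    "Linnea": "Leafeon",
    "Rea": "Glaceon",
    "Kira": "Sylveon",  # only after Sylveon's release
}

def evolve_by_name(nickname: str) -> str:
    """Return the guaranteed evolution for a nickname, or the random pool."""
    # The trick is case-sensitive: 'pyro' will not work, only 'Pyro'.
    return NAME_TRICK.get(nickname, "random (Flareon, Jolteon, or Vaporeon)")

print(evolve_by_name("Pyro"))  # Flareon
print(evolve_by_name("pyro"))  # random (Flareon, Jolteon, or Vaporeon)
```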

Eevee to Umbreon or Espeon

To evolve an Eevee into Umbreon or Espeon, make it your buddy and walk it 10km. Once you’ve reached 10km, your Eevee should show one of two silhouettes on its Evolve button: Espeon during daylight hours, Umbreon at night. Evolve your Eevee during the day and you’ll get an Espeon; evolve at night and you’ll get an Umbreon.

This process will work as many times as you like. You could potentially evolve hundreds of Eevee into Umbreon if you’re willing to walk every one of them 10km and evolve them at night.
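The buddy method is a simple two-condition check: distance walked, then time of day. A hypothetical sketch of the logic described above (our own illustration, not game code):

```python
def buddy_evolution(km_walked: float, is_daytime: bool) -> str:
    """Which evolution a buddy Eevee gets under the walking method."""
    if km_walked < 10:
        # Below the walking threshold, the normal random pool applies.
        return "random (Flareon, Jolteon, or Vaporeon)"
    # At or past 10km, the silhouette depends on the in-game time of day.
    return "Espeon" if is_daytime else "Umbreon"

print(buddy_evolution(12.5, True))   # Espeon
print(buddy_evolution(12.5, False))  # Umbreon
print(buddy_evolution(3.0, True))    # random (Flareon, Jolteon, or Vaporeon)
```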

Eevee to Leafeon and Glaceon

The process for attaining a Leafeon or Glaceon is closer to one-time-only than the Umbreon and Espeon method. To get a Leafeon or Glaceon, you’ll need a Mossy Lure or a Glacial Lure, respectively.

Attach a Mossy Lure to a Pokestop and check the Eevee you wish to evolve. If you are within range of the Pokestop with the lure, you should see the silhouette on your Eevee’s evolve button appear as Leafeon.

Attach a Glacial Lure to a Pokestop if you want to evolve an Eevee into Glaceon. You will only be able to evolve ONE Eevee into a Glaceon, and ONE Eevee into a Leafeon with this Lure process.

Eevee to Sylveon

The Sylveon evolution process requires that you have an Eevee as a Buddy Pokemon and attain a requisite number of hearts. To do this, you’ll need to play with your Eevee, walk with your Eevee, feed food to your Eevee, and so forth. Once you have enough hearts, your Eevee’s evolve button should change to show a silhouette of Sylveon.

AGAIN, note that this process will only start to work after Sylveon is released into the game. The release date for Sylveon in Pokemon GO is Tuesday, May 25th, 2021, at 10AM local time. This is the start of the Luminous Legends Y event (Part 2), and Sylveon will continue to appear in the game from this point forward!



Google’s updated Voice Access leverages AI to detect in-app icons

Google today launched an updated version of Voice Access, its service that enables users to control Android devices using voice commands. It leverages a machine learning model to automatically detect icons on the screen based on UI screenshots, enabling it to determine whether elements like images and icons have accessibility labels, or labels provided to Android’s accessibility services.

Accessibility labels allow Android’s accessibility services to refer to exactly one on-screen element at a time, letting users know when they’ve cycled through the UI. Unfortunately, some elements lack labels, a challenge the new version of Voice Access aims to address.

A vision-based object detection model called IconNet in the new Voice Access (version 5.0) can detect 31 different icon types, soon to be extended to more than 70. As Google explains in a blog post, IconNet is based on the novel CenterNet architecture, which extracts app icons from input images and then predicts their locations and sizes. Using Voice Access, users can refer to icons detected by IconNet by their names, e.g., “Tap ‘menu.’”

To train IconNet, Google engineers collected and labeled more than 700,000 app screenshots, streamlining the process by using heuristics, auxiliary models, and data augmentation techniques to identify rarer icons and enrich existing screenshots with infrequent icons. “IconNet is optimized to run on-device for mobile environments, with a compact size and fast inference time to enable a seamless user experience,” Google Research software engineers Gilles Baechler and Srinivas Sunkara wrote in a blog post.

Google says that in the future, it plans to expand the range of elements supported by IconNet to generic images, text, and buttons. It also plans to extend IconNet to differentiate between similar-looking icons by identifying their functionality. Meanwhile, on the developer side, Google hopes to increase the number of apps with valid content descriptions by improving tools that suggest content descriptions for different elements when building applications.

Above: IconNet analyzes the pixels of the screen and identifies the centers of icons by generating heatmaps, which provide precise information about the position and type of the icons present on the screen.
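The heatmap-to-centers step described in the caption is the core of center-based detectors like CenterNet: each cell of a per-class heatmap holds a confidence score, and local maxima above a threshold become detections. This toy sketch (our own illustration, not IconNet’s actual code) shows the idea:

```python
def find_centers(heatmap, threshold=0.5):
    """Return (row, col, score) for cells that are local maxima above threshold.

    `heatmap` is a 2D list of per-pixel confidence scores for one icon class;
    a real detector would produce one such map per class, plus size offsets.
    """
    h, w = len(heatmap), len(heatmap[0])
    centers = []
    for r in range(h):
        for c in range(w):
            score = heatmap[r][c]
            if score < threshold:
                continue
            # Compare against the 8-connected neighborhood (clipped at edges).
            neighbors = [
                heatmap[rr][cc]
                for rr in range(max(0, r - 1), min(h, r + 2))
                for cc in range(max(0, c - 1), min(w, c + 2))
                if (rr, cc) != (r, c)
            ]
            if all(score >= n for n in neighbors):
                centers.append((r, c, score))
    return centers

demo = [
    [0.1, 0.2, 0.1, 0.0],
    [0.2, 0.9, 0.2, 0.0],
    [0.1, 0.2, 0.1, 0.7],
    [0.0, 0.0, 0.1, 0.3],
]
print(find_centers(demo))  # [(1, 1, 0.9), (2, 3, 0.7)]
```

A production model would refine each peak with predicted size and offset maps; the peak-picking itself is this simple.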

“A significant challenge in the development of an on-device UI element detector for Voice Access is that it must be able to run on a wide variety of phones with a range of performance capabilities, while preserving the user’s privacy,” Baechler and Sunkara wrote. “We are constantly working on improving IconNet.”

Voice Access, which launched in beta in 2016, dovetails with Google’s other mobile accessibility efforts. The company is continuing to develop Lookout, an accessibility-focused app that can identify packaged foods using computer vision, scan documents to make it easier to review letters and mail, and more. There’s also Project Euphonia, which aims to help people with speech impairments communicate more easily; Live Relay, which uses on-device speech recognition and text-to-speech to let phones listen and speak on a person’s behalf; and Project Diva, which helps people give the Google Assistant commands without using their voice.

