
‘Hyenas’ is a team shooter from the creators of ‘Alien: Isolation’

Creative Assembly is best known for deliberately paced games like Alien: Isolation and the Total War series, but it’s about to jump headlong into the multiplayer action realm. The developer is partnering with Sega to introduce Hyenas, a team-based shooter coming to PS5, PS4, Xbox Series X/S, Xbox One and PC in 2023. The title takes its cues from tech headlines, but doesn’t take itself (or its gameplay mechanics) too seriously.

You join three-person teams to raid spaceship shopping malls for the coveted merch left behind by Mars billionaires. You’ll have to compete against four other loot-seeking teams while simultaneously dealing with security systems, hired goons and zero gravity. You can not only flip gravity on and off, but also use bridge-making goo and other special abilities to claim the upper hand. And yes, it’s pretty silly — you can expect appearances from Richard Nixon masks, Sonic the Hedgehog merch and Pez dispensers.

The creators are currently accepting sign-ups for a closed alpha test on PC. They’ve also made clear there will be no “pay to win” systems. While that suggests you might have the option of buying cosmetic items, your success should depend solely on skill. It’s just a question of whether Hyenas will be good enough to pry gamers away from multiplayer shooter mainstays like the Call of Duty series or Fortnite.

All products recommended by Engadget are selected by our editorial team, independent of our parent company. Some of our stories include affiliate links. If you buy something through one of these links, we may earn an affiliate commission.

Repost: Original Source and Author Link


‘Layers of Fears’ from Bloober Team hits PC and consoles in 2023

Bloober Team is returning to its roots with Layers of Fears, a “psychological horror chronicle” heading to PlayStation 5, Xbox Series X and S, and PC in early 2023. The game is a new story chapter in the Layers of Fear universe, building on the spooky psychedelic foundation laid out in the previous installments.

“We are bringing back a franchise that is really special for us, in a new form that will give players a truly fresh gaming experience and that will shed new light on the overall story,” Bloober Team CEO Piotr Babieno said in a press release. “Our plan was to recreate the games, but we didn’t want to make it a simple collection of two remastered games. We’ve worked out a new approach, something that is maybe not yet obvious. But I can tell you there’s a reason why we called it Layers of Fears.”

Bloober Team launched its original horror franchise in 2016 with Layers of Fear and an expansion subtitled Inheritance. A full sequel came out in 2019, and over the years Bloober Team has partnered with major studios to create spooky games including Blair Witch and The Medium.

Last year, Bloober Team entered into a partnership with Konami, the publisher of the Silent Hill franchise, fueling rumors that the studio was working on a remake of Silent Hill 2. These rumors came to a head just before the Summer Game Fest kickoff show this year — but as it turns out, it was Layers of Fears all along. The studio is reportedly working on multiple games simultaneously, so there’s still a chance for Bloober Team to get in on the Silent Hill franchise.

Bloober Team is co-developing Layers of Fears with Anshar Studios, which also helped out with Observer: System Redux.



Microsoft and KPMG team up to bring Azure Quantum to more enterprises


Today, Microsoft announced that KPMG is now a systems integrator for Azure Quantum. The organizations plan to collaborate on quantum-inspired optimization (QIO) algorithms, identifying new ways for Azure Quantum customers to apply them to optimization challenges.

“KPMG professionals are working with Microsoft to explore the applications of quantum and quantum-inspired solutions, using Azure Quantum — the world’s first full-stack public cloud ecosystem for quantum solutions — to link business needs to technology capabilities,” Krysta Svore, general manager of Microsoft Quantum, told VentureBeat.

KPMG has a team dedicated to quantum-related technologies. The two companies expect that team to educate organizations on how to apply quantum-inspired optimization to business problems in industries including financial services, energy, and health care. (Quantum-inspired optimization refers to running quantum computing algorithms on classical hardware.)
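The idea can be illustrated with a classical stand-in. The sketch below is a hypothetical, minimal simulated-annealing solver for a tiny QUBO (quadratic unconstrained binary optimization) problem — the general problem class QIO services target. It is not Azure Quantum’s actual solver API; the function name and the toy problem are illustrative assumptions.

```python
import math
import random

def anneal_qubo(Q, n, steps=5000, t0=2.0, seed=0):
    """Minimize x^T Q x over binary vectors x via simulated annealing,
    a classical heuristic in the same family as quantum-inspired solvers."""
    rng = random.Random(seed)

    def energy(x):
        # Q is a sparse dict mapping index pairs (i, j) to coefficients.
        return sum(c * x[i] * x[j] for (i, j), c in Q.items())

    x = [rng.randint(0, 1) for _ in range(n)]
    e = energy(x)
    best, best_e = x[:], e
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9   # linear cooling schedule
        i = rng.randrange(n)
        x[i] ^= 1                            # propose a single-bit flip
        e_new = energy(x)
        # Accept downhill moves always; uphill moves with Boltzmann probability.
        if e_new <= e or rng.random() < math.exp((e - e_new) / t):
            e = e_new
            if e < best_e:
                best, best_e = x[:], e
        else:
            x[i] ^= 1                        # reject: undo the flip
    return best, best_e

# Toy problem: two variables that each lower the energy alone but
# "repel" each other; the minimum energy is -1 with exactly one bit set.
Q = {(0, 0): -1.0, (1, 1): -1.0, (0, 1): 2.0}
solution, value = anneal_qubo(Q, n=2)
print(solution, value)
```

Real QIO services expose far more sophisticated solvers, but the shape of the workflow — encode a business problem as a cost function over binary variables, then hand it to an annealing-style optimizer on classical hardware — is the same.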

What is Azure Quantum and how far off is quantum computing? 

Azure Quantum is a solution that provides organizations and developers with remote access to quantum software and hardware, including the Microsoft Quantum Development Kit (QDK), which they can use to develop and run quantum computing programs on classical computing systems such as GPUs, CPUs, and FPGAs in the cloud.

The solution gives organizations access to quantum computing approaches with immense processing power, able to work through data sets and computational problems much faster than a classical computer and tackle previously intractable optimization challenges.

For instance, a finance organization can use quantum computing to simulate market trends, develop more sophisticated investment insights, and reduce risk in a way they couldn’t with the limited processing power of classical computing hardware.

“The Azure Quantum platform allows us to explore numerous different solver approaches utilizing the same code, helping to minimize re-work and improve efficiency. The shared goal for these initial projects is to build solution blueprints for common industry optimization problems using Azure Quantum, which we can then provide to more clients at scale,” said Bent Dalager, the global head of KPMG’s Quantum Hub.

While quantum computing has the potential to optimize business processes, many critics believe that the technology is a long way off from being able to help decision-makers solve problems that are unique to their businesses.

“Many believe that quantum computers are a decade away from being useful, but the reality is that technology that emulates quantum principles, classical “quantum-inspired” technology, is available today and has the potential to make a significant difference for certain industries,” Svore said.

“Microsoft customers are harnessing quantum computing technology, by building quantum solutions and running them on multiple quantum hardware systems with little to no change in code, or using quantum-inspired solutions deployed on classical hardware to fundamentally change how they solve their challenging problems today while preparing for scaled quantum computing of the future,” she added.

Building a quantum ecosystem 

Since its announcement at Ignite in 2019, Azure Quantum has emerged as one of the top cloud-based quantum computing solutions, with organizations including Ford, Trimble, and OTI Lumionics using it to build quantum-inspired optimization solutions for everything from finding the most efficient routes for fleet vehicles to financial modeling and materials design.

Researchers estimated in 2021 that the global quantum computing market would be worth $487.4 million that year and grow to $3,728.4 million by 2030.

It’s also a key reason why the quantum computing industry is highly competitive, with tech giants like Google, Amazon, and IBM vying for dominance in the space, each offering its own managed access to quantum resources via Google Quantum AI, IBM Quantum, and Amazon Braket, respectively.

This year, IBM Quantum announced IBM Quantum System Two at the 2021 IBM Quantum Summit, while Amazon announced the opening of the AWS Center for Quantum Computing in Pasadena, California, where the organization aims to build a fault-tolerant quantum computer.

While the quantum computing industry is competitive, Microsoft is aiming to differentiate Azure Quantum from competitors by building a full-stack quantum ecosystem. “We are focused on innovation at every layer of the quantum stack, from the applications and solutions to the cryogenic control down to the qubits themselves,” Svore explained.

“A critical aspect of this full-stack ecosystem is expertise in integrating and customizing new technologies for customer solutions. Our partnership with KPMG brings additional expertise to our ecosystem and will enable customers to realize quantum and quantum-inspired solutions more rapidly,” Svore concluded.

VentureBeat

VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative technology and transact.

Our site delivers essential information on data technologies and strategies to guide you as you lead your organizations. We invite you to become a member of our community, to access:

  • up-to-date information on the subjects of interest to you
  • our newsletters
  • gated thought-leader content and discounted access to our prized events, such as Transform 2021: Learn More
  • networking features, and more

Become a member



Shadow Lugia in Pokemon GO: Giovanni returns with Team Rocket

On November 9, 2021, at 12:01 AM, a Team GO Rocket event was scheduled to begin in Pokemon GO. The event, called “With Light Comes Shadow…,” is named after one of the several phrases a Team GO Rocket Grunt might say before entering a battle. It’s special because it brings Giovanni back to the game after an extended absence, along with Shadow Lugia!

Shadow Lugia

If you’re fighting Lugia, you’re going to be taking on a mainly Psychic-type Pokemon with a bit of Flying-type action. This Pokemon is weak to Ghost, Dark, Electric, Ice, and Rock-type moves. As such, you’ll want to bring out Pokemon like Gengar, Magnezone, Weavile, Raikou, Gyarados, Manectric, Darkrai, Tyranitar (Smack Down and Crunch), and even Hoopa!

Once you capture a Shadow Lugia, you’ll do well to make sure it has Dragon Tail as its Fast Move – or teach it said move for optimal battle action. The best Charged Move a Shadow Lugia can have is certainly Aeroblast. Teach that monster this move with a Charged TM, for sure. Future Sight comes in at a close second.

Giovanni Encounter

If you have a Super Rocket Radar, you’ll have the opportunity to find and battle Giovanni. If you do not have a full Super Rocket Radar, you’ll need to complete some Special Research to attain said Radar. In the past, Niantic planned on releasing a bit of Giovanni Rocket Special Research at the start of each month – that didn’t pan out. Now that we’re seeing Giovanni return to the game, Niantic might be indicating that they’re ready to try a month-to-month launch sort of deal – we shall see!

If you want to fight any of the other Rocket leaders, you’ll need to go battle a bunch of Team GO Rocket Grunts until you get enough parts of a Rocket Radar. One Rocket Radar will grant you access to battle Cliff, Sierra, or Arlo. The Pokemon teams you’ll be battling with these members of Team GO Rocket are new. If you’ve been attempting to take down any of the Team GO Rocket leaders in the past, their current lineup might be surprising!

Arlo will start with Shadow Gligar, for sure. The second Pokemon could be Shadow Mawile, Shadow Lapras, or a third Pokemon (not yet revealed). The third Pokemon Arlo battles with could be Shadow Scizor or one of two other mystery Pokemon.

Cliff will be rolling with Shadow Grimer for the first toss, followed by Shadow Venusaur (or one of two other mystery Pokemon). The third Pokemon in Cliff’s team could be either Shadow Tyranitar or one of two other mystery Pokemon.

Sierra’s team is slightly less of a mystery than the rest – she’ll have Shadow Nidoran for the start, followed by Shadow Beedrill, Shadow Vileplume, or Shadow Slowbro. The third Pokemon on Sierra’s team will be Shadow Houndoom, Shadow Marowak, or a third mystery Pokemon.

Battling the rest of Giovanni’s team will require that you take on Shadow Persian first. The second Pokemon will be Shadow Rhyperior, Shadow Kingler, or a third mystery Pokemon. The final Pokemon you’ll battle with Giovanni will, of course, be Shadow Lugia.



NeuReality and IBM team up to develop AI inference platforms

[Updated 5:44am PST]

NeuReality, an Israel-based semiconductor company developing high-performance AI inference technology, has signed an agreement with IBM to develop that technology further.

The technology aims to deliver cost and power consumption improvements for deep learning use cases of inference, the companies said. This development follows NeuReality’s emergence from stealth earlier in February with an $8 million seed round to accelerate AI workloads at scale.

AI inference is a growing area of focus for enterprises because it’s the stage at which trained neural networks are actually applied in real applications and yield results. IBM and NeuReality claim their partnership will allow the deployment of computer vision, recommendation systems, natural language processing, and other AI use cases in critical sectors like finance, insurance, healthcare, manufacturing, and smart cities. They also claim the agreement will accelerate deployments of today’s ever-growing AI use cases, which are already running in public and private cloud datacenters.

NeuReality has competition in Cast AI, a technology company offering a platform that “allows developers to deploy, manage, and cost-optimize applications in multiple clouds simultaneously.” Some other competitors include Comet.ml, Upright Project, OctoML, Deci, and DeepCube. However, this partnership with IBM will see NeuReality become the first start-up semiconductor product member of the IBM Research AI Hardware Center and a licensee of the Center’s low-precision high performance Digital AI Cores.

VentureBeat connected via email with Moshe Tanach, CEO and co-founder of NeuReality, to get a broader view on the direction of this partnership.

Delivering a new reality to datacenters and near edge compute solutions

NeuReality’s agreement with IBM includes cooperation around NR1, NeuReality’s first Server-on-a-Chip ASIC implementation of its AI-centric architecture. The NR1 is a high performance, fully linear, scalable, network-attached device that provides services of AI workload processing, NeuReality says. In simpler terms, the NR1 offering targets cloud and enterprise datacenters, alongside carriers, telecom operators, and other near edge compute solutions—enabling them to deploy AI use cases more efficiently. The NR1 is based on NeuReality’s first generation FPGA-based NR1-P prototype platform introduced earlier this year.

In line with NeuReality’s vision to make AI accessible to all, this technology will remove the system bottlenecks of today’s solutions and provide disruptive cost and power consumption benefits for inference systems and services, the company said. The collaboration with IBM will ensure NeuReality’s already available FPGA-based NR1-P platform supports software integration and system-level validation prior to the availability of the NR1 production platform next year, the companies said.

“Having the NR1-P FPGA platform available today allows us to develop IBM’s requirements and test them before the NR1 Server-on-a-Chip’s tapeout. Being able to develop, test and optimize complex datacenter distributed features, such as Kubernetes, networking, and security before production is the only way to deliver high quality to our customers. I am extremely proud of our engineering team who will deliver a new reality to datacenters and near edge solutions. This new reality will allow many new sectors to deploy AI use cases more efficiently than ever before,” Tanach added.

A marker of NeuReality’s continued momentum

According to Dr. Mukesh Khare, Vice President of Hybrid Cloud research at IBM Research, “In light of IBM’s vision to deliver the most advanced Hybrid Cloud and AI systems and services to our clients, teaming up with NeuReality, which brings a disruptive AI-centric approach to the table, is the type of industry collaboration we are looking for. The partnership with NeuReality is expected to drive a more streamlined and accessible AI infrastructure, which has the potential to enhance people’s lives.”

As part of the agreement, IBM becomes a design partner of NeuReality and will work on the product requirements for the NR1 chip, system, and SDK that will be implemented in the next revision of the architecture. Together the two companies will evaluate NeuReality’s products for use in IBM’s Hybrid Cloud, including AI use cases, system flows, virtualization, networking, security, and more.

Following NeuReality’s announcement of its first-of-its-kind AI-centric architecture back in February and its collaboration with Xilinx to deliver its new AI-centric FPGA-based NR1-P platforms to the market in September, this agreement with IBM marks the company’s upward trajectory and continued momentum.

 



Microsoft and Nvidia team up to train one of the world’s largest language models

Microsoft and Nvidia today announced that they trained what they claim is the largest and most capable AI-powered language model to date: Megatron-Turing Natural Language Generation (MT-NLP). The successor to the companies’ Turing NLG 17B and Megatron-LM models, MT-NLP contains 530 billion parameters and achieves “unmatched” accuracy in a broad set of natural language tasks, Microsoft and Nvidia say — including reading comprehension, commonsense reasoning, and natural language inference.

“The quality and results that we have obtained today are a big step forward in the journey towards unlocking the full promise of AI in natural language. The innovations of DeepSpeed and Megatron-LM will benefit existing and future AI model development and make large AI models cheaper and faster to train,” Nvidia’s senior director of product management and marketing for accelerated computing, Paresh Kharya, and group program manager for the Microsoft Turing team, Ali Alvi wrote in a blog post. “We look forward to how MT-NLG will shape tomorrow’s products and motivate the community to push the boundaries of natural language processing (NLP) even further. The journey is long and far from complete, but we are excited by what is possible and what lies ahead.”

Training massive language models

In machine learning, parameters are the part of the model that’s learned from historical training data. Generally speaking, in the language domain, the correlation between the number of parameters and sophistication has held up remarkably well. Language models with large numbers of parameters, more data, and more training time have been shown to acquire a richer, more nuanced understanding of language, for example gaining the ability to summarize books and even complete programming code.


To train MT-NLG, Microsoft and Nvidia say that they created a training dataset with 270 billion tokens from English-language websites. Tokens, a way of separating pieces of text into smaller units in natural language, can either be words, characters, or parts of words. Like all AI models, MT-NLP had to “train” by ingesting a set of examples to learn patterns among data points, like grammatical and syntactical rules.
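The token concept can be seen in a few lines of code. This toy splitter is an assumption for demonstration only — it is not the tokenizer Microsoft and Nvidia used — but it shows how the same text yields different token counts at the word, subword, and character level:

```python
text = "Megatron-Turing trains on token sequences."

# Word-level tokens: split on whitespace.
word_tokens = text.split()

# Character-level tokens: every character is its own token.
char_tokens = list(text)

# Crude subword pass: break hyphenated compounds into pieces, loosely
# mimicking how subword schemes split rare or compound words.
subword_tokens = []
for w in word_tokens:
    subword_tokens.extend(w.replace("-", " - ").split())

print(len(word_tokens), len(subword_tokens), len(char_tokens))
# The same sentence is 5 word tokens, 7 subword tokens, or 42 character tokens.
```

Production tokenizers learn their subword vocabulary from data, which is why a "270 billion token" corpus measures size in model-specific units rather than words.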

The dataset largely came from The Pile, an 835GB collection of 22 smaller datasets created by the open source AI research effort EleutherAI. The Pile spans academic sources (e.g., Arxiv, PubMed), communities (StackExchange, Wikipedia), code repositories (Github), and more, which Microsoft and Nvidia say they curated and combined with filtered snapshots of the Common Crawl, a large collection of webpages including news stories and social media posts.

Above: The data used to train MT-NLP.

Training took place across 560 Nvidia DGX A100 servers, each containing 8 Nvidia A100 80GB GPUs.

When benchmarked, Microsoft says that MT-NLP can infer basic mathematical operations even when the symbols are “badly obfuscated.” While not extremely accurate, the model seems to go beyond memorization for arithmetic and manages to complete tasks containing questions that prompt it for an answer, a major challenge in NLP.

It’s well-established that models like MT-NLP can amplify the biases in data on which they were trained, and indeed, Microsoft and Nvidia acknowledge that the model “picks up stereotypes and biases from the [training] data.” That’s likely because a portion of the dataset was sourced from communities with pervasive gender, race, physical, and religious prejudices, which curation can’t completely address.

In a paper, the Middlebury Institute of International Studies’ Center on Terrorism, Extremism, and Counterterrorism claims that GPT-3 and similar models can generate “informational” and “influential” text that might radicalize people into far-right extremist ideologies and behaviors. A group at Georgetown University has used GPT-3 to generate misinformation, including stories around a false narrative, articles altered to push a bogus perspective, and tweets riffing on particular points of disinformation. Other studies, like one published in April by researchers at Intel, MIT, and the Canadian AI initiative CIFAR, have found high levels of stereotypical bias in some of the most popular open source models, including Google’s BERT and XLNet and Facebook’s RoBERTa.

Microsoft and Nvidia claim that they’re “committed to working on addressing [the] problem” and encourage “continued research to help in quantifying the bias of the model.” They also say that any use of Megatron-Turing in production “must ensure that proper measures are put in place to mitigate and minimize potential harm to users,” and follow tenets such as those outlined in Microsoft’s Responsible AI Principles.

“We live in a time [when] AI advancements are far outpacing Moore’s law. We continue to see more computation power being made available with newer generations of GPUs, interconnected at lightning speeds. At the same time, we continue to see hyper-scaling of AI models leading to better performance, with seemingly no end in sight,” Kharya and Alvi continued. “Marrying these two trends together are software innovations that push the boundaries of optimization and efficiency.”

The cost of large models

Projects like MT-NLP, AI21 Labs’ Jurassic-1, Huawei’s PanGu-Alpha, Naver’s HyperCLOVA, and the Beijing Academy of Artificial Intelligence’s Wu Dao 2.0 are impressive from an academic standpoint, but building them doesn’t come cheap. For example, the training dataset for OpenAI’s GPT-3 — one of the world’s largest language models — was 45 terabytes in size, enough to fill 90 500GB hard drives.

AI training costs dropped 100-fold between 2017 and 2019, according to one source, but the totals still exceed the compute budgets of most startups. The inequity favors corporations with extraordinary access to resources at the expense of small-time entrepreneurs, cementing incumbent advantages.

For example, OpenAI’s GPT-3 required an estimated 3.14 × 10^23 floating-point operations (FLOPs) of compute during training. (FLOPS, floating-point operations per second, is a measure of raw processing performance, typically used to compare different types of hardware.) Assuming OpenAI reserved 28 teraflops — 28 trillion floating-point operations per second — of compute across a bank of Nvidia V100 GPUs, a common GPU available through cloud services, it’d take $4.6 million for a single training run. One Nvidia RTX 8000 GPU with 15 teraflops of compute would be substantially cheaper — but it’d take 665 years to finish the training.
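Those figures follow from back-of-the-envelope arithmetic. The total-FLOPs estimate is the one cited above; the ~$1.50-per-V100-hour cloud price is an assumed rate, so the dollar figure is approximate:

```python
total_flops = 3.14e23   # estimated total operations to train GPT-3
v100_flops = 28e12      # assumed sustained throughput of the V100 bank (FLOPS)
rtx_flops = 15e12       # single RTX 8000 throughput (FLOPS)
price_per_hour = 1.50   # assumed cloud price per V100-hour, in dollars

gpu_hours = total_flops / v100_flops / 3600
cost = gpu_hours * price_per_hour            # ≈ $4.7 million
years = total_flops / rtx_flops / 3600 / 24 / 365  # ≈ 664 years on one RTX 8000

print(f"{gpu_hours:.2e} GPU-hours, ~${cost/1e6:.1f}M, {years:.0f} years")
```

Both results land within rounding of the $4.6 million and 665-year figures in the text.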

Microsoft and Nvidia say that they observed between 113 and 126 teraflops per GPU while training MT-NLP. The cost is likely to have been in the millions of dollars.

A Synced report estimated that a fake news detection model developed by researchers at the University of Washington cost $25,000 to train, and Google spent around $6,912 to train a language model called BERT that it used to improve the quality of Google Search results. Storage costs also quickly mount when dealing with datasets at the terabyte — or petabyte — scale. To take an extreme example, one of the datasets accumulated by Tesla’s self-driving team — 1.5 petabytes of video footage — would cost over $67,500 to store in Azure for three months, according to CrowdStorage.
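The Tesla storage figure implies a per-gigabyte rate that is easy to back out (the $67,500 total and three-month window are from the CrowdStorage estimate above; the per-GB-month rate is derived here, not quoted by either company):

```python
dataset_gb = 1.5e6    # 1.5 petabytes expressed in gigabytes
months = 3
total_cost = 67_500   # CrowdStorage's Azure estimate, in dollars

rate = total_cost / (dataset_gb * months)
print(f"~${rate:.3f} per GB-month")  # ~$0.015 per GB-month
```

That implied rate is in the range of commodity cloud object storage, which is the point: at petabyte scale even cheap storage compounds into real money.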

The effects of AI and machine learning model training on the environment have also been brought into relief. In June 2019, researchers at the University of Massachusetts at Amherst released a report estimating that the amount of power required for training and searching a certain model involves emissions of roughly 626,000 pounds of carbon dioxide, equivalent to nearly five times the lifetime emissions of the average U.S. car. OpenAI itself has conceded that models like Codex require significant amounts of compute — on the order of hundreds of petaflops per day — which contributes to carbon emissions.

In a sliver of good news, the cost for FLOPS and basic machine learning operations has been falling over the past few years. A 2020 OpenAI survey found that since 2012, the amount of compute needed to train a model to the same performance on classifying images in a popular benchmark — ImageNet — has been decreasing by a factor of two every 16 months. Other recent research suggests that large language models aren’t always more complex than smaller models, depending on the techniques used to train them.
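The halving-every-16-months trend is a simple exponential, which makes its implications easy to compute (the function below is an illustrative sketch of the cited trend, not OpenAI's own code):

```python
def relative_compute(months_elapsed, halving_months=16):
    """Compute needed to reach a fixed ImageNet accuracy, relative to the
    2012 baseline, under a halving-every-16-months efficiency trend."""
    return 0.5 ** (months_elapsed / halving_months)

# Eight years (96 months) after 2012: 2**-6, i.e. 1/64 of the original compute.
print(relative_compute(96))
```

In other words, if the trend held, a result that took a fixed compute budget in 2012 needed only about 1.5% of that budget eight years later.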

Maria Antoniak, a natural language processing researcher and data scientist at Cornell University, says when it comes to natural language, it’s an open question whether larger models are the right approach. While some of the best benchmark performance scores today come from large datasets and models, the payoff from dumping enormous amounts of data into models is uncertain.

“The current structure of the field is task-focused, where the community gathers together to try to solve specific problems on specific datasets,” Antoniak told VentureBeat in a previous interview. “These tasks are usually very structured and can have their own weaknesses, so while they help our field move forward in some ways, they can also constrain us. Large models perform well on these tasks, but whether these tasks can ultimately lead us to any true language understanding is up for debate.”



Meow Wolf, Anthos team for multi-cloud app management in art shows


Meow Wolf’s work with SADA, a Google Cloud Premier Partner with multiple specializations, and its use of Anthos multi-cloud app management were featured in a spotlight session on immersive art experiences at the Google Cloud Next ’21 conference held online through October 14.

Meow Wolf is an American arts and entertainment company that creates large-scale immersive art installations and produces streaming content, music videos, and arts and music festivals. SADA is a cloud-computing consultant based in North Hollywood, California. Google Anthos is a next-gen, hybrid- and multi-cloud application management platform that aims to provide a consistent development and operations experience for cloud and on-premises environments.

Scalable, flexible multi-cloud app management

Known more for its work with enterprise clients, SADA is helping Meow Wolf design and apply solutions for its permanent multimedia installations, such as Omega Mart, now open in Las Vegas. Anthos fit Meow Wolf’s requirements for a modern cloud application that could be deployed on-premises to ensure low latency and fault tolerance.

The complex, always-on nature of Omega Mart required the scalable IT infrastructure Anthos offers. Anthos allows apps to run unmodified on existing on-premises hardware and many public clouds in simple, flexible, and secure ways.

“Anthos has helped us create a groundbreaking experience that immerses guests in a way that’s never been done before,” said Jordan Snyder, vice president of platform at Meow Wolf. “It gives us a ‘single pane of glass’ to monitor, maintain, and quickly push out app updates.”

Omega Mart, an interactive “supermarket,” is Meow Wolf’s second permanent art exhibition leveraging the hybrid cloud platform to run sensory installations.

Billed as the world’s most surreal supermarket and sensory playground, the installation, which opened in February, features otherworldly displays, hidden portals, immersive art experiences, and shelves stocked with peculiar products. With live, interactive displays that can be accessed via RFID-powered Boop Card readers, shoppers become part of the experience.

“It’s exciting to know that technology like Anthos can be applied to bring artistic visions to life in new and creative ways,” said Miles Ward, CTO at SADA. “Omega Mart is one of many amazing ways to apply Anthos technology.”

“SADA has been instrumental to this process, from helping us conceive the technical solutions to tackling various hurdles along the way,” Snyder said. SADA’s consultants worked with Anthos and helped Meow Wolf design and apply solutions to meet Omega Mart’s needs. He added, “Their guidance, expertise, and support helped make the launch of Omega Mart a huge success.”

Anthos is now used to host Meow Wolf’s applications and various installations that capture customer interactions with the various Boop and computer stations, which facilitate the interactive gameplay element of the experience and drive the exhibit’s story. Since the opening of the exhibit, SADA has continued to provide technical account management and support.

VentureBeat

VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative technology and transact.

Our site delivers essential information on data technologies and strategies to guide you as you lead your organizations. We invite you to become a member of our community, to access:

  • up-to-date information on the subjects of interest to you
  • our newsletters
  • gated thought-leader content and discounted access to our prized events, such as Transform 2021: Learn More
  • networking features, and more

Become a member


Battlefield 2042’s Hazard Zone mode is about collecting intel with your team

As is the way of things when it comes to revealing a major game’s features these days, EA has been drip-feeding Battlefield 2042 info over the last several months. To wit, it has only just pulled back the curtain on Hazard Zone, one of the game’s three main modes, a month before launch.

Hazard Zone is about getting into the arena, retrieving data drives and escaping via an extraction point before a storm overwhelms you or enemies take you out. Only two teams can make it out, as only a couple of extraction windows will pop up at random locations (though only one player needs to get out for their team to win). Matches run for up to 20 minutes and will take place across all seven of Battlefield 2042’s maps.

Survival is key here. You only have one life, but one of your three teammates can resurrect you if you’re killed. Once your entire team is wiped out, it’s game over. Still, if you’re sneaky enough, you can win a match without firing a shot. Some satellites will already be on the ground at the start of a game, and more will drop in as the round progresses, so you’ll need to adjust your strategy as you go.

Before the start of a round, you and your teammates can kit yourselves out with gadgets. Players can use money earned in previous matches (primarily by making it out with data drives) to buy gear like a scanner that shows data drive locations, a healing upgrade and a Squad Redeploy Call-in. The latter lets you revive dead squad mates; otherwise, you’ll need to find a Redeploy Uplink somewhere on the map to bring back your buddies.

All of the XP you earn will go toward your overall Battlefield 2042 progression, which will boost your player level and unlock weapons. Teams are made up of unique characters — players will need to find specialists and loadouts that work in harmony to increase their chances of success.

Hazard Zone isn’t quite a battle royale mode, since you don’t need to be the last squad standing to win. Instead, it’s objective-based and actually sounds a little like the main mode of a recently announced (and delayed) Ubisoft title. As with the other Battlefield 2042 modes, Hazard Zone supports 64 players on Xbox One and PlayStation 4. On PS5, Xbox Series X/S and PC, up to 128 players will square off on larger maps.


How Envision Virgin Racing team uses data science to hone performance

Effective use of data science can help business leaders improve their decision-making processes. In the high-speed world of motorsport, those decisions have race-changing implications.

That’s certainly the case for Sylvain Filippi, managing director and CTO of the Envision Virgin Racing team, one of the leading teams in Formula E — a single-seater motorsport championship that only uses electric cars. His team produces huge amounts of data, but needs to use this information effectively to produce a competitive advantage.

To give the team every chance of success, Filippi began working with global professional services firm Genpact two years ago. Envision Virgin Racing uses Genpact’s data science skills to hone performance on race day. Filippi explained to VentureBeat how the relationship works and the advantages it provides to his team.

VentureBeat: Why is data crucial on race days?

Sylvain Filippi: That’s everything to do with the race format in Formula E. Formula 1 races are spread out over three or four days; they do free practice on Thursday and Friday, qualifying on Saturday, and a race on Sunday. They have loads of time to look at the data and analyze it. In Formula E, we do all of these elements in one day: at 7:30 a.m., we start free practice one, and then free practice two at around 10 a.m., qualifying at midday, and at 4 p.m., we race. You basically have an hour or less in-between sessions to download all the data, to look at it, gather all the insights, and then make decisions on what you’re changing and what you’re modifying for the next session. So it’s a huge engineering challenge, purely because of the race format.

VentureBeat: What does this data challenge mean in terms of your business?

Filippi: You have to be super good at IT – we have a top IT infrastructure team, given the size of our business. You also need talented software engineers and data engineers because you need to be really efficient at downloading data, analyzing it, and structuring it. And then, the engineers need to know what they are looking for in terms of getting the right insights and making the right changes. Those changes are twofold – making any changes on the car like in any other racing formula, but also all the data that is related to the energy side of Formula E. There’s a gigantic amount of work to cover in less than an hour between sessions.

VentureBeat: How are you working with Genpact to help you deal with this data challenge?

Filippi: Genpact is helping us by using its expert capabilities in data science. They’re starting to play with our data, which according to them is pretty good because there’s a lot of detail and it’s really well structured, which is not the case in many companies. I’m pretty certain that they’re going to be able to find trends and correlations and links between all sorts of random stuff — from weather to temperature — and onto the tires. They’ll be able to crunch our entire set of data and find some trends that we haven’t seen and we don’t know about because it’s physically impossible for the human brain.

VentureBeat: Why is the application of data science so important to your team?

Filippi: This work is exciting because it’s never been done in motorsport before really. It’s very, very early days in terms of how to gain performance through AI and machine learning – and that’s a fun exercise. We started working with Genpact about two years ago. And we started from scratch. There was no software or a platform; nothing like that existed. So, they located some super clever data scientists and started looking at our data, and they’re starting to come up with insights and models.

VentureBeat: Where has the relationship with Genpact produced dividends?

Filippi: You have the pure performance side of the car, which is changing race by race, which is about asking “how do you make the car go faster?” There’s also the whole strategy side. We don’t have pit stops at the moment in Formula E, even though they could come back in the future. But we do have Attack Mode, which allows teams to temporarily raise the power output of a Formula E car. Using that boost at the right time can have race-changing implications, so when do you use Attack Mode? There are many decisions that need to be made in a 45-minute race around Attack Mode and what we do in any sporting situation. Data is crucial because we’re a relatively limited team by motorsport regulations – you’re only allowed 17 operational people at the track, and that’s not many compared to Formula 1.

VentureBeat: Where else are you using data science on race day?

Filippi: In Formula E, instead of running a set number of laps, as in Formula 1, races are timed – 45 minutes plus one lap. Managing that sounds easy, but it’s a gigantic challenge because when you start the race, you have a certain amount of energy in your battery. If you don’t know the distance you’re going to be covering, then you don’t know how much energy you can use per lap. If you misjudge it, you’ll run out of energy at the end of the race. Or on the other side, you’ll finish with too much energy, which means you could have gone faster. Genpact has worked with us to develop a piece of software that helps us evaluate that energy relationship dynamically throughout the race so we can accurately estimate the exact distance that we’re going to cover. And that’s hugely complex because, by definition, what happens in racing is an ever-changing scenario. No one can predict a race – there’s always the impact of a safety car or rain or something else.
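The pacing problem Filippi describes – dividing a fixed battery budget across an unknown race distance – can be illustrated with a rolling estimate: project the remaining laps from the average lap time so far, then spread the remaining energy evenly across them. A minimal sketch, where the function name, the numbers, and the naive constant-pace model are all illustrative assumptions, not Genpact’s actual software:

```python
def energy_target_per_lap(remaining_energy_kwh, elapsed_s, race_length_s,
                          avg_lap_time_s, extra_laps=1):
    """Naive per-lap energy budget for a timed race ("45 minutes + 1 lap").

    Projects how many laps remain by assuming the current average pace
    holds, then splits the remaining battery energy evenly across them.
    """
    time_left_s = max(race_length_s - elapsed_s, 0.0)
    # Laps still to run: time left at the current pace, plus the lap(s)
    # completed after the clock expires.
    laps_left = time_left_s / avg_lap_time_s + extra_laps
    return remaining_energy_kwh / laps_left
```

Re-running an estimate like this every lap is what makes it dynamic: a safety car raises the average lap time, fewer laps remain, and the per-lap budget loosens accordingly.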

VentureBeat: How are you using analytics to engage the fan base?

Filippi: Fans these days, especially the younger generation, are really data-hungry – they want to understand how the sport works and why decisions are made. The strategy aspect of it, and why you make a decision, is super important. So we’re working with Genpact on how we can give even more to the fans because it makes the sport more engaging and it makes them excited. But that’s a work in progress. It’s really important – we’re a new sport. This is our seventh season, and we’ve grown at a rapid pace. Now, we are a major motorsport platform and we need to keep going, but also we need to reaffirm who we are and our values.

VentureBeat: What is your best-practice tip for making the most of data?

Filippi: Genpact has taught us that it’s really important to be very organized – structured data is everything. If you have a lot of data, but it’s completely unstructured – and you can’t access it because it’s in documents where you can’t extract the data – then it’s pretty useless. So you’ve got to really think about the outcome and the insights you want to get to, and then work backwards to what your data should look like. Never underestimate the value of your data, which in most companies around the world is much more valuable than people would think.


‘Lost Judgment’ will let you team up with a dog detective

Lost Judgment, the sequel to detective adventure Judgment, arrives in a couple of months and Sony has given a deeper look at what to expect with a gameplay trailer. For one thing, there are a ton of mini-games for you to check out, including a Sonic the Hedgehog one.

You’ll be able to tail and chase suspects, once again adopt disguises, harness Takayuki Yagami’s parkour skills and use a bevy of gadgets. Perhaps most excitingly, you’ll have a companion dog who can help you find targets and assist in fights.

When it comes to combat, you can draw from a variety of martial arts forms, including the new counterattack-centric snake style. The trailer also shows off more of Yagami’s side-quests while he’s undercover as a high school advisor. You’ll be able to build and control a robot in the robotics club, for instance.

It’s not exactly surprising that there’ll be so much to see and do here, given the depth of the original and developer Ryu ga Gotoku Studio’s history with the Yakuza series. The trailer gives just a taste of what you’ll be able to do in Lost Judgment, which arrives on PlayStation 4, PS5, Xbox One and Xbox Series X/S on September 24th.
