
Halo Infinite dev ‘looking at’ battle pass progression after player feedback

Halo Infinite's multiplayer launched into beta earlier this week, and while the game itself has seemingly been well received, there are some aspects players take issue with. Perhaps the biggest is battle pass progression, which has been a contentious topic for Halo fans since well before Infinite's launch. Now, after fans made their voices heard, 343 Industries says it's looking into complaints about the battle pass.

What’s the issue with Halo Infinite’s battle pass?

Breaking with past games in the series, Halo Infinite is being offered as a standalone, free-to-play release for the first time. Since it's free-to-play, Microsoft needs to make money in ways other than charging upfront. Like many other free-to-play shooters, Microsoft has opted to offer a seasonal battle pass alongside a cash shop where players can use premium currency to purchase skins and other cosmetic items.

As I noted yesterday in my first impressions article, Halo Infinite's monetization feels a lot like Fortnite's. While Infinite's cash shop charges ridiculous prices for its cosmetics (something we hope to see changed in the future), the battle pass has a different problem: It's no fun to level up because progress is so slow.

While most games grant battle pass progress simply for playing matches, no such mechanism exists in Halo Infinite. Instead, the only way to earn progress toward battle pass levels is by completing challenges, a change fans have contested since it was announced. While it initially seemed that 343 would stick to its guns for Halo Infinite season 1, the studio now suggests that changes could come sooner.

A change may be coming

On Twitter, 343 community director Brian Jarrard referenced these battle pass criticisms directly, saying that the team is considering data gathered from early matches played during the beta. “Thank you to everyone who has jumped into the #HaloInfinite beta so far! FYI the team is looking at Battle Pass progression and gathering data from yesterday’s sessions and we’ll share updates as we have them,” Jarrard said. “Please continue to share feedback and raise flags as you see them.”

While that’s not confirmation that anything will be changing, it is confirmation that 343 is listening to the complaints. As I said in SlashGear’s first impressions post, battle pass progression feels needlessly slow at the moment, and it isn’t enjoyable to pay money for the premium battle pass only to have it progress at a snail’s pace.

The hope is that 343 will tweak it in such a way that players start getting battle pass XP for each game they play. Of course, 343 could come up with a different solution to these battle pass progression issues, or it may simply opt to keep things the way they are. We’ll update you when we hear more from the studio, so stay tuned for that.



Linux Foundation to promote dataset sharing and software dev techniques

At the Linux Foundation Membership Summit this week, the Linux Foundation — the nonprofit tech consortium founded in 2000 to standardize Linux and support its growth — announced new projects: Project OpenBytes and the NextArch Foundation. OpenBytes is an “open data community” as well as a new data standard and format primarily for AI applications, while NextArch — which is spearheaded by Tencent — is dedicated to building software development architectures that support a range of environments.

OpenBytes

Dataset holders are often reluctant to share their datasets publicly due to a lack of knowledge about licenses. In a recent Princeton study, the coauthors found that the opaqueness around licensing — as well as the creation of derivative datasets and AI models — can introduce serious ethical issues, particularly in the computer vision domain.

OpenBytes is a multi-organization effort, governed by the Linux Foundation, charged with creating an open data standard, structure, and format with the goal of reducing data contributors' liability risks. Data published, shared, and exchanged in that format will be available on the project's future platform, ostensibly helping data scientists find the data they need and making collaboration easier.

Linux Foundation senior VP Mike Dolan believes that if data contributors understand that their ownership of data is well-protected and that their data won’t be misused, more data will become accessible. He also thinks that initiatives like OpenBytes could save a large amount of capital and labor resources on repetitive data collection tasks. According to a CrowdFlower survey, data scientists spend 60% of their time cleaning and organizing data and 19% of their time actually collecting datasets.

“The OpenBytes project and community will benefit all AI developers, both academic and professional and at both large and small enterprises, by enabling access to more high-quality open datasets and making AI deployment faster and easier,” Dolan said in a statement.

Autonomous car company Motional (a joint venture of Hyundai and Aptiv), Predibase, Zilliz, Jina AI, and ElectrifAi are among the early members of OpenBytes.

NextArch

As for NextArch, it’s meant to serve as a “neutral home” for open source developers and contributors to build an architecture that can support compatibility between microservices. “Microservices” refers to a type of architecture that enables the rapid, frequent, and reliable delivery of large and complex apps.

Cloud-native computing, AI, the internet of things (IoT), and edge computing have spurred enterprise growth and digital investment. According to market research, the digital transformation market was valued at $336.14 billion in 2020 and is expected to grow at a compound annual growth rate of 23.6% from 2021 to 2028. But a lack of common architecture is preventing developers from fully realizing these technologies’ promises, Linux Foundation executive director Jim Zemlin asserts.

“Developers today have to make what feel like impossible decisions among different technical infrastructures and the proper tool for a variety of problems,” Zemlin said in a press release. “Every tool brings learning costs and complexities that developers don’t have the time to navigate, yet there’s the expectation that they keep up with accelerated development and innovation.”

Enterprises generally see great value in emerging technologies and next-generation platforms and customer channels. Deloitte reports that the implementation of digital technologies can help accelerate progress towards organizational goals such as financial returns, workforce diversity, and environmental targets by up to 22%. But existing blockers often prevent companies from fully realizing these benefits. According to a Tech Pro survey, buy-in from management and users, training employees on new technology, defining policies and procedures for governance, and ensuring that the right IT skillsets are onboard to support digital technologies remain challenges for digital transformation implementation.

Toward this end, NextArch aims to improve data storage, heterogeneous hardware, engineering productivity, telecommunications, and more through “infrastructure abstraction solutions,” specifically new frameworks, designs, and methods. The project will seek to automate operations and processes to “increase the autonomy of [software] teams” and create tools for enterprises that address the problems of productization and commercialization in digital transformation.

“NextArch … understands that solving the biggest technology challenges of our time requires building an open source ecosystem and fostering collaboration,” Dolan said in a statement. “This is an important effort with a big mission, and it can only be done in the open source community. We are happy to support this community and help build open governance practices that benefit developers throughout its ecosystem.”



Pinecone CEO on bringing vector similarity search to dev teams

The traditional way for a database to answer a query is with a list of rows that fit the criteria; if there's any sorting, it's done one field at a time. Vector similarity search instead looks for matches by comparing the likeness of objects, as captured by machine learning models. Pinecone.io brings vector similarity to the average developer by offering it as a turnkey service.

Vector similarity search is particularly useful with real-world data, which is often unstructured and full of similar yet not identical items. It doesn't require an exact match, because the closest value is often good enough. Companies use it for things like semantic search, image search, and recommender systems.

Success often depends on the quality of the algorithm used to turn the raw data into a succinct vector embedding that effectively captures the likeness of objects in a dataset. This process must be tuned to the problem at hand and the nature of the data. An image search application, for instance, could use a simple model that turns each image into a vector of numbers representing the average color in each part of the image. Far more elaborate deep learning models are now easy to obtain, often from the deep learning frameworks themselves.
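
To make that concrete, here is a minimal sketch of such an average-color embedding in Python. It assumes only Pillow and NumPy; the grid size, image size, and file names are illustrative choices of our own, not anything prescribed by Pinecone.

```python
# A minimal sketch of the "average color per region" embedding described above.
import numpy as np
from PIL import Image

def average_color_embedding(path, grid=4):
    """Turn an image into a short vector: the mean RGB color of each grid cell."""
    img = np.asarray(Image.open(path).convert("RGB").resize((64, 64)), dtype=np.float32)
    step = 64 // grid
    cells = []
    for row in range(grid):
        for col in range(grid):
            cell = img[row * step:(row + 1) * step, col * step:(col + 1) * step]
            cells.append(cell.mean(axis=(0, 1)))  # mean R, G, B for this cell
    return np.concatenate(cells)  # shape: (grid * grid * 3,)

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Two visually similar photos should score closer to 1.0 than two unrelated ones:
# cosine_similarity(average_color_embedding("a.jpg"), average_color_embedding("b.jpg"))
```

Two crops of the same photo land near each other in this small embedding space while unrelated images do not, which is the basic property any similarity search exploits.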

We sat down with Edo Liberty, CEO and one of the founders of Pinecone, and Greg Kogan, VP of marketing, to talk about how they're turning this mathematical approach into a vector database that a development team can deploy with just a few clicks.

VentureBeat: Pinecone specializes in finding vector similarities. There have always been ways to chain together lots of WHERE clauses in SQL to search through multiple columns. Why isn't that good enough? What motivated Pinecone to build out vector distance functions and find the best matches?

Edo Liberty: Vectors are by no means new things. They have been a staple of large-scale machine learning and a part of machine learning-driven services for at least a decade now in larger companies. It’s been kind of “table stakes” for the bigger companies for at least a decade now. My first startup was based on technologies like this. Then, we used it at Yahoo. Then, we built another database that deployed it.

It's a big part of image recognition algorithms and recommendation engines, but it really didn't hit the mainstream until machine learning did. With pretrained models, AI scientists started generating these embeddings, vector representations of complex objects, for pretty much everything. So the barrier just got a lot lower, and vectors became a lot more common. People suddenly started having these vectors, and suddenly it's like they're asking, "OK, what now?"

Greg Kogan: The reason WHERE clauses fall short is that they're only as useful as the number of facets you have. You can string together WHERE clauses, but it won't produce a ranked answer. Even for something as common as semantic search, once you can get a vector embedding of your text document, you can measure the similarity between documents much better than if you're stringing together words and just looking for keywords within the document. Another thing we're hearing about is search over other unstructured data types, like images or audio files, things for which there was no semantic search before. But now they can convert unstructured data into vector embeddings, run vector similarity search on those items, and do things like find similar images or find similar products. If you do it on user behavior data or event logs, you can find similar events, similar shoppers, and so on.
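
Kogan's contrast between keyword filtering and ranked similarity search is easy to sketch in Python. The snippet below is purely illustrative; the document vectors are assumed to come from whatever embedding model you choose, and nothing here reflects Pinecone's actual API.

```python
# Illustrative contrast between a WHERE-clause-style keyword filter and a
# ranked vector similarity search over precomputed document embeddings.
import numpy as np

def keyword_filter(docs, term):
    # SQL-style filtering: a document either matches the keyword or it doesn't,
    # and nothing about the result is ranked.
    return [d for d in docs if term.lower() in d.lower()]

def rank_by_similarity(query_vec, doc_vecs, top_k=3):
    # Vector similarity: every document gets a score, and results come back ranked.
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    scored = sorted(
        ((i, cosine(query_vec, vec)) for i, vec in enumerate(doc_vecs)),
        key=lambda pair: pair[1],
        reverse=True,
    )
    return scored[:top_k]  # indices of the closest documents, best match first
```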

‘Once it’s a vector, it’s all the same to us’

VentureBeat: What kind of preprocessing do you need to do to get to the point where you’ve got the vector? I can imagine what it might be for text, but what about other domains like images or audio? 

Kogan: Once it’s a vector, it’s all the same to us. We can perform the same mathematical operations on it. From the user’s point of view, they would need to find an embedding model that works with their type of data. So for images, there are many computer vision models available off the shelf. And if you’re a larger company with your own data science team, you’re most likely developing your own models that will transform images into vector embeddings. It’s the same thing for audio. There’s wav2vec for audio, for instance.

For text and images, you can find loads of off-the-shelf models. For audio and streaming data, they’re hard to find so it does take some data science work. So the companies that have the most pressing need for this are those more advanced companies that have their own data science teams. They’ve done all the data science work and they see that there’s a lot more they can do with those vectors.
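
For readers curious what 'off the shelf' looks like in practice, one widely used open source option for text, chosen here as our own example rather than one named in the interview, is the sentence-transformers library, which wraps pretrained embedding models behind a couple of lines of Python.

```python
# Example of an off-the-shelf text embedding model, assuming the open source
# sentence-transformers package is installed (pip install sentence-transformers).
# The model name below is a common public checkpoint chosen for illustration.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # small pretrained sentence-embedding model
docs = [
    "How do I reset my password?",
    "Shipping usually takes three to five business days.",
]
embeddings = model.encode(docs)   # one fixed-length vector per document
print(embeddings.shape)           # (2, 384) for this particular model
```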

VentureBeat: Are any of the models more attractive, or does it really involve a lot of domain-specific kind of work?

Kogan: The off-the-shelf models are good enough for a lot of use cases. If you’re using basic semantic search over documents, you can find some off-the-shelf models, like sentence embeddings and things like that. They are fine. If your whole business depends on some proprietary model, you may have to do it on your own. Like if you’re a real estate startup or financial services startup and your whole secret sauce is being able to model something like financial risk or the price of a house, you’re going to invest in developing your own models. You could take some off-the-shelf model and retrain it on your own data to eke out some better performance from it.

Large banks of questions generate better results

VentureBeat: Are there examples of companies that have done something that really surprised you, that built a model that turned out to be much better than you thought it would even end up?

Liberty: If you have a very large bank of questions and good answers to those questions, a common and reasonable approach is to look for the most similar question and just return the best answer that you have for that other question, right? It sounds very simplistic, but it actually does a really good job, especially if you have a large bank of questions and answers. The larger the collection, the better the results.

Kogan: We didn't even realize it could be applicable to bot detection and duplicate-image detection. Say you're a consumer company that allows image uploads; you may have a bot problem where a user uploads some bad images. Once that image is banned, they try to upload a slightly tweaked version of it. Simply looking up a hash of that image is not going to find you a match. But if you look for similarity, for closely similar images, you can suspend that account immediately or at least flag it for review.

We’ve also heard this for financial services organizations, where they get way more applications than they can manually review. So they want to flag applications that resemble previously flagged fraudulent applications.
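
That near-duplicate use case boils down to a distance threshold rather than an exact hash lookup. Here is a rough sketch; the embedding step, the list of banned vectors, and the threshold value are all stand-in assumptions.

```python
# Sketch of near-duplicate detection: an exact hash misses a slightly edited
# image, but a small embedding distance still flags it.
import numpy as np

def is_near_duplicate(candidate_vec, banned_vecs, threshold=0.05):
    """Flag the upload if it sits very close to any previously banned item."""
    for banned in banned_vecs:
        sim = np.dot(candidate_vec, banned) / (
            np.linalg.norm(candidate_vec) * np.linalg.norm(banned)
        )
        # Cosine distance near 0 means the two items are near copies.
        if 1.0 - sim < threshold:
            return True
    return False

# In production, this linear scan would be replaced by a nearest-neighbor query
# against a vector index (such as Pinecone), which returns the closest matches directly.
```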

VentureBeat: Is your technology proprietary? Did you build this on some kind of open source code? Or is it some mixture?

Kogan: At the core of Pinecone is a vector search library that is a proprietary index. A vector index. We find that people don’t care so much about exactly which index it is or whether it’s proprietary or open source. They just want to add this capability to their application. How can I do that quickly and how can I scale it up? Does it have all the features we need? Does it maintain its speed and accuracy at scale? And who manages the infrastructure?

Liberty: We do want to contribute to the open source community. And we’re thinking about our open core strategy. It’s not unlikely that we will support open source indexes publicly soon. What Greg said is accurate. I’m just saying that we are big fans of the open source community and we would love to be able to contribute to it as well.

VentureBeat: Now it seems that, as a developer, you don't necessarily integrate it with any of your databases per se. You just side-load the data into Pinecone. When you query, it returns some kind of key, and you go back to the traditional database to figure out what that key means.

Kogan: Exactly right. Yes, you’re running it alongside your warehouse or data lake. Or you might be storing the main data anywhere. Soon we’ll actually be able to store more than just the key in Pinecone. We’re not trying to be your source of truth for your user database or your warehouse. We just want to eliminate the round trips. Once you find your ranked results or similar items, then we’ll have a bit more there. If all you want is the S3 location of that item or the user ID, you’ve got it in your results.
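
That key-lookup workflow can be summarized in a few lines. Both clients below are hypothetical stand-ins, not real APIs: the vector index could be Pinecone, and the system of record could be a warehouse, a user database, or object storage.

```python
# Sketch of the pattern described above: the vector index returns IDs (and,
# eventually, a little metadata), while the full records live in the primary
# data store. `vector_index` and `primary_db` are hypothetical client objects.

def find_similar_items(query_vec, vector_index, primary_db, top_k=5):
    # 1. Ask the vector index for the IDs of the most similar items.
    matches = vector_index.query(query_vec, top_k=top_k)  # e.g. [("item_42", 0.93), ...]

    # 2. Hydrate those IDs from the system of record (warehouse, user DB, S3, ...).
    ids = [item_id for item_id, _score in matches]
    return primary_db.fetch_many(ids)
```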

More flexibility on pricing

VentureBeat: On pricing, it looks like you just load everything into RAM. Your prices are determined by how many vectors you have in the dataset. 

Kogan: We used to have it that way. We recently started letting some users have a little bit more control over things like the number of shards and replicas. Especially if they want to increase their throughput. Some companies come to us with insanely high throughput demands and latency demands. When they sign up and they create an index, they can choose to have more shards and more replicas for higher availability and throughput. In that case, you still have the same amount of data, but because it’s being replicated, you’re going to pay more because you’re looking for data on more machines.

VentureBeat: How do you handle the jobs where companies are willing to wait a little bit and don’t care about a cold start? 

Kogan: For some companies, the memory-based pricing doesn’t make sense. So we’re happy to work with companies to find another model.

Liberty: What you’re asking about is a lot more fine-grained control over costs and performance. We do work with larger customers and larger teams. We just sat down with a very large company today. The workload is 50 billion vectors. Usually, we have a very tight response time. Let’s say 20, 30, 40, 50 milliseconds is typical 99% of the time. But they say that this is an analytical workload and we are happy to have a full second latency or even two seconds. That means they can pay less. We are very happy to work with customers and find trade-offs, but it’s not something that’s open in the API today. If you sign in on the website and use the product, you won’t have those options available to you yet.

Kogan: We simplified the self-serve pricing on the website to make it easier for people to just jump in and play around with it. But once you have 50 billion vectors and crazy performance or scale requirements, come talk to us. We can make it work.

Our initial bet was that more and more companies would use vector data as machine learning models become more prevalent and data scientists become more productive. They realize that you can do a lot more with your data once it's in a vector format. You can collect less of it and still succeed. There are privacy and consumer protection implications as well.

It's becoming less and less extreme of a bet. The early adopters, the most advanced companies, have already done this. They're using vector similarity search in recommendation systems and for their search results. Facebook uses it for feed ranking. The vision is that more companies will leverage vector data for recommendations and for many use cases still to be discovered.

Liberty: The leaders already have it. It’s already happening. It’s more than just a trend.



Our Life Dev Talks the Unconventional Joy of Visual Novels

The visual novel genre is a space within video games that gives indie developers a lot of freedom and opportunity to create the stories they would like to see, especially given the genre's heavy emphasis on text-based gameplay. Originating in Japan, visual novels have a massive international audience, with games ranging from romance-driven stories to horror.

GB Patch Games is an indie game development company that was founded in 2015 and focuses solely on visual novels. With a number of titles in its library already, the studio's current focus is its Our Life series. The first title, Our Life: Beginnings and Always, was released in November 2020, while the next entry, Our Life: Now & Forever, is currently in development. Our Life: Beginnings and Always follows the story of two childhood friends, their families, and four significant summers ranging from childhood to young adulthood.

In speaking to Katelyn Clark, founder of GB Patch Games, I learned a little more about the developer’s focus on visual novels, their approach to storytelling, and some of the unique elements found in the Our Life series.

GB Patch Games was founded back in 2015. With a focus on developing visual novels, you have a pretty sizable library of games! What was the initial draw towards the visual novel genre as developers?

Growing up I loved playing video games. It was a lot of fun to interact with them and move the story along myself through gameplay. But what ended up amazing me more than anything else was when I happened to try games that allowed you to interact with the story/character side of the experience. RPGs with secret endings, the Harvest Moon series where you got to pick which person the main character married (if anyone), etc. The realization that in a game where you had some level of control over what happened, things didn’t have to go only one way always stuck with me.

Though for most of my childhood, I always thought normal gameplay came first and that’s all there was. I didn’t know about text adventures or genres like that until I was older. Visual novels were the first story-centered type of game I stumbled on as a young teen. And it was very exciting to find games where the gameplay was all about interacting with a story. I was immediately a big fan and have been ever since.

In terms of the visual novel genre, are there any specific elements to the genre that stand out to you? Was there any particular gap you noticed in the genre that you thought you could fill with your games?

I enjoy stories where not only is anything possible, but where there can be more than one thing possible. Visual novels don’t have to follow any conventions, though certain genres do have some expected elements, and can include as many options/branches as the creator wants. It’s a lot of fun.

I didn’t think I was doing anything particularly needed when I got into making my own, though. I just wanted to create games I liked, try whatever ideas I had even if they weren’t consistent with each other, and hoped other people would have a good time with them too.

With such a wide range of games within the genre, what about the genre has contributed to the stories that you’re looking to tell through your games?

Visual novels make it easy to try out the different interaction concepts I’ve had an interest in. From point management to timed options, to choices that are purely for roleplaying/customization, and always getting to choose who the main character has a preference for (if anyone).

In playing through Our Life: Beginnings and Always, I noticed that the ability to really customize the player character is a huge part of the game. What is the significance for the GB Patch team of providing such inclusive customization options for players?

When coming up with the initial ideas for Our Life, player customization wasn’t something terribly present. I was thinking about how I wanted to create a story where the characters grew up over the years, how it’d be fun to have your choices influence the way the love interest developed, that I wanted to attempt an atmosphere that was genuinely relaxing/nostalgic to people, and there was a goal of really showing off how charming the childhood friend trope could be if it was given complete focus.

A look at the character customization screen in Our Life: Beginnings & Always.

While going through that list, difficult decisions had to be made. With so many elements and such a long period of time to go over, it was stressful to try coming up with one single, perfect way to do it that would give players of all different personalities a positive experience. My solution was to default to ‘well, maybe we could make it a choice instead of a set thing’.

After repeated rounds of that, it eventually clicked that if I’m imagining players having this ideal little summer story of growing up without worries of how it’ll play out, that the best way to do that would be to just let them live how they wanted to. It was pretty obvious in retrospect. At that point, we began work on coming up with systems and structures that would allow as much personalization as possible so that as many people as possible felt comfortable and included.

Was creating extremely customizable experiences a big focus for GB Patch from the beginning?

It wasn’t. Originally, I wanted the games we made to be a story you had a say in, but still an intentional/set experience. I thought us keeping a good amount of control was the best way to ensure there was a consistent tone and the players would feel the ways we hoped they would and such. It wasn’t until Our Life: Beginnings & Always that all the good things major customization could do got through to me. I imagine quite a lot of our projects will use heavy personalization from here on out.

Specifically looking at Our Life: Beginnings and Always, the game definitely has so many choices to go through. With over 300,000 words in the game, there's a lot of story to read through. What has been the general approach to creating both the base game and its DLC in terms of writing and mapping out the customizable choices?

It's definitely freeform. I take each part of the game (Step openings, Moments, and Step closings) one at a time. Almost nothing is strict or unchangeable or decided on in advance. Often there aren't even pre-planned premises for any of the Moments. The events only start to truly exist when I'm actually working on a single new Moment or opening or ending.

Some of the characters of Our Life: Beginnings & Always talk on screen.

There were overarching themes and certain types of scenes I knew I wanted to include, but I didn’t know for sure how anything was going to fit or play out until I worked through each section. Then once new content was complete, I’d go back and add references or alterations to prior scenes where needed. I like being able to give myself as much time as possible before deciding on the content and being free to go with whatever makes sense when I do reach the point where a choice has to be made.

What has been the most rewarding part of creating visual novels? Is it the final game and story, the community that has sprung up around your games, the team that you work with?

Our Life: Beginnings & Always resonated with people more than I ever thought it would. We tried our best, though I honestly thought the game would come out to be an ‘enjoyable time’ and that’d be satisfying enough. I didn’t have confidence that it’d succeed in having a deeper emotional impact. But it did for a lot of the players who gave it a chance.

Seeing how the story and characters made people happy, helped them feel calmer in a really stressful time, let them reflect on their real lives, allowed them to live out things they couldn’t in their real life, and all of that has definitely been the most rewarding part of everything I’ve done with visual novels.

Interview responses have been lightly edited for clarity.



GDC Dev Survey Shows Full Impact of the Pandemic on Gaming

It probably comes as no surprise that the pandemic had a major impact on the video game industry. That’s been especially apparent in 2021 as major game delays seem to happen every week. Companies like Ubisoft have completely shifted their release schedule as studios adapt to work from home development.

That’s left gamers with plenty of anxiety about just how long we’ll be feeling the effects of the pandemic in the gaming industry. If games that were close to completion needed to be delayed a full year, what does that mean for games that were early in their development cycle?

We now have some clearer answers to those questions thanks to GDC’s 2021 State of the Game industry report. GDC surveyed over 3,000 developers this year, who shed some light on how the pandemic impacted their games. While the short-term effects have been grim, a move to remote work may be a net positive for gaming in the future.

The bad news

The immediate effects of the pandemic on gaming have been fairly obvious. With studios forced to suddenly shift to remote work on a dime, game development hit a snag back in March 2020. Games like The Last of Us Part 2 had their release dates shifted back a few months to deal with that shift, but the real scope of the blowback wasn’t felt until 2021. After all, games that were scheduled to launch in 2020 were already close to the finish line. It’s the games that weren’t quite as far along that could be impacted hardest.

GDC's State of the Game survey echoed that result. In 2020's poll, only 33% of respondents said their game faced a pandemic-related delay. This year, that number shot up to 44%. While the majority of developers polled said their games hadn't been delayed, that's still a significant rise compared to last year.

Game developers have faced plenty of challenges at different phases of the process, from playtesting to prototyping new ideas. I spoke to GDC content marketing lead Kris Graft about the survey's findings, and he gave a specific example of the obstacles developers have faced.

"Early on in the pandemic, we were talking to the Jackbox folks and they were talking about the way that they make games," says Graft. "Their stuff is so community-based and interactive with other users. They had an issue with playtesting. I am an advocate for remote work, but I understand that you can't devalue having face-to-face time."

Logistical nightmares aren't the only problems game developers faced over the last year and a half. Several developers cited childcare as a source of headaches as well. Work-life balance became challenging to juggle as developers found themselves interrupted by kids who were stuck learning from home. Others lost access to some of their contractors, who had to pivot to taking care of their kids full time.

The good news

With a year and a half of challenges, one has to wonder if gaming will continue struggling with delays (both internal and external ones) beyond 2021. Fortunately, Kris Graft believes that we may be through the worst of it as studios begin to reopen and formally adopt hybrid workflows.

“If a game is in its early stages right now, I don’t think five years from now people are going to say it’s because of COVID-19 that it got pushed back,” Graft jokes. “As people have gotten used to the new processes, things start to move more smoothly. Once offices start opening up safely and people are able to start doing hybrid or in-office work, I don’t see why the past year and a half would continue to impact game release dates.”

If anything, the survey indicates that remote work could be a net positive for the industry in the long run. Developers largely felt that their productivity didn’t suffer while working from home. In fact, 35% said their productivity either somewhat or greatly increased, while 32% said it stayed about the same. While others found it more challenging, the results indicate that developers were more than capable of working outside of a studio.

The amount developers worked during a week remained static as well compared to 2020's polling. A majority of respondents said they were working somewhere between 36 and 45 hours a week, a positive sign that developers were able to maintain boundaries in their work-life balance.

There’s one aspect of remote work that’s been especially positive for gaming. Studios are less restricted in who they can hire. Rather than just recruiting local talent who can come into an office, companies can expand their talent search since work can happen remotely. That means studios can make more diverse hires, as well as nab powerhouse talent that wouldn’t be willing to move for a job.

Despite the delays, Graft finds that the forced shift to work from home is ultimately good for the industry. The long-term effects may sound troubling in theory, but survey data indicates that remote work is able to help game developers, not hinder them.

“I do think that it’s good for this to be an option for companies to have now,” says Graft. “I don’t think remote working is ever going to replace all in-person stuff, but you can strike a balance and expand the talent pool. I think that’s really good for the game industry; getting more people in and not forcing them to move out to the Bay or something.”

The video game industry was forced to change over the past year and a half. While some of those changes were tough, companies shouldn't throw out what's worked for developers. The industry should come out of this stronger, not weaker.



Game Dev Says AMD Super Resolution Took A Few Hours to Add

FidelityFX Super Resolution (FSR) is AMD's open-source alternative to Nvidia's deep learning super sampling (DLSS), and we're starting to hear opinions on it from game developers. Although both tools achieve the same goal, Jérémy Zeler-Maury, CEO of Midgar Studio and lead programmer on Edge of Eternity, says that FSR is much easier to implement into games and that both tools achieve similar image quality at 4K.

Edge of Eternity is a unique case study. The indie-developed JRPG includes DLSS and FSR, so the developer has experience working with both. In an interview with Wccftech, Zeler-Maury contrasted the “complex” process of integrating DLSS into the game with FSR, saying the latter only took “a few hours” to implement.

“Implementing DLSS was quite complex to integrate into Unity for a small studio like us; it required tweaking the engine and creating an external plugin to bridge Unity and DLSS,” Zeler-Maury said. “FSR, on the other hand, was very easy to implement, it only took me a few hours, requiring only simple data.”

The interview confirms a lot of what we already know about FSR. It’s much easier for developers to use, but Zeler-Maury cautions that it doesn’t work as well at lower resolutions. “For lower resolutions like upscaling to 1080p or 720p, I think DLSS gives a better result since it can reconstruct parts of missing details,” Zeler-Maury said.

DLSS uses machine learning to reconstruct an image, which makes it more equipped to maintain detail at low resolutions. FSR, according to Zeler-Maury, requires “the source data to be as sharp as possible.” As long as it is, the developer says that both tools provide amazing results, even going as far as to say they prefer FSR at 4K resolution because it produces fewer artifacts than DLSS.

In our FidelityFX Super Resolution review, we found that the upscaling technique is capable of delivering much higher frame rates at high resolutions, but the more aggressive upscaling modes give up too much visual quality. Zeler-Maury suggests that might not matter. They assert that the lion’s share of games will receive FSR support simply because it’s easier to implement.

“I think FSR being very easy to integrate means more games are going to get it, DLSS is more complicated to integrate. Aside from pre-integrated versions in engine or AAA games, I don’t think many small developers will integrate it,” Zeler-Maury said.

Still, DLSS isn’t down and out. Although it may not be able to reach as many games as FSR in the future, DLSS’ image reconstruction quality remains unmatched. FSR is providing credible competition with how easy it is to implement into games, however. A modder, for example, was able to patch FSR into Grand Theft Auto 5 before it released to the general public.

FSR supports a small list of games right now, including Anno 1800 and Godfall. AMD says that many more games will support the feature in the near future, including Far Cry 6, Resident Evil: Village, and Myst. 

Edge of Eternity is available now, with the FSR patch coming in the future.



Snowflake taps C3 AI to bring AI dev tools and apps to customers

C3 AI and Snowflake are partnering to give Snowflake customers access to C3 AI’s development tools and enterprise applications, including AI-driven CRM, predictive maintenance, supply network optimization, and fraud detection apps, the companies announced.

Billed as a way to “deliver next-generation enterprise AI applications at scale,” the partnership will make C3 AI’s suite of Integrated Development Studio (IDS) tools — including C3 AI Data Studio, C3 AI ML Studio, C3 AI App Studio, C3 AI DevSecOps Studio, and C3 AI Marketplace — available to Snowflake users.

Broader access

Customers using Snowflake’s cloud-based data warehousing platform will also get access to C3 AI’s AI Suite of operational and security apps based on the C3 AI model-driven architecture. Services provided by those apps include data persistence, batch and stream processing, time-series normalization, auto-scaling, data encryption, attribute and role-based access control, and AI/ML services.

“The C3 AI Suite and C3 AI’s prebuilt enterprise-grade models significantly speed and simplify the development of enterprise AI applications. As our customers deploy enterprise AI applications at scale, integration with C3 AI to Snowpark will accelerate the development and deployment of complex AI and machine learning use cases,” Snowflake senior vice president Christian Kleinerman said in a statement.

Snowpark is Snowflake's new developer framework, which allows DataFrame-style programming in Scala for the Snowflake platform. The company introduced it on Tuesday at the annual Snowflake Summit developer conference. Snowpark was designed to simplify writing complex data pipelines for Snowflake.

Synergy

C3 AI, headquartered in Redwood City, California, is chaired by Siebel Systems founder Thomas Siebel and develops enterprise software using AI techniques like machine learning and neural networks for a broad array of operational tasks across various industries. Snowflake, based in Bozeman, Montana, offers enterprise customers cloud-based data storage and analytics services running on AWS, Azure, and Google Cloud.

“This partnership will create significant time and operational efficiencies for Snowflake’s customers and solidify Snowflake as the operational data platform of choice for enterprise AI applications,” C3 AI president and chief product officer Houman Behzadi said in a statement.

Behzadi said customers would be able to get C3 AI applications on Snowflake “up and running at scale” in as little as a month.



Cyberpunk 2077 dev CD Projekt hack was apparently really bad

It seems that CD Projekt RED, once the darling of "big" indie game developers, oxymoronic as that may sound, can't catch a break. Loved for its Witcher RPG series, the Polish studio has lately become known less favorably as a giant publisher, one that also happens to own GOG, formerly Good Old Games. The disastrous launch of Cyberpunk 2077 was a terrible experience for many gamers, but now even the company's own employees may be at risk after CD Projekt revealed that a massive data breach earlier this year may have been worse than first thought.

The months leading up to that cybersecurity attack were very stressful ones for the company because of the much-delayed yet still terrible launch of Cyberpunk 2077. It’s not that hard to imagine that the attack may have even been provoked by the rather intense sentiments that were aired over the Internet during those few months. The hackers may have even been disillusioned Cyberpunk 2077 players themselves.

The hackers claimed that, among other things, they got hold of company data related to accounting, HR, and investors, and they demanded that CD Projekt pay a ransom. The game developer naturally refused to play that game and assured its customers that their information remained safe. The same cannot be said, however, of its own employees.

Four months after the incident, CD Projekt has given an update that isn't pleasant to hear. It now believes that the stolen data may have included "current/former employee and contractor details," all of which may already be circulating on the dark web. The company also can't say for certain that the data hasn't been manipulated or tampered with.

While the company assures the public that it has taken action to make sure this doesn't happen again, those words may not carry much weight with those already affected. CD Projekt promises to protect the privacy of its employees and says it will take action against anyone who shares the stolen data. Given the company's current public standing, those promises might not carry much weight either.



Nvidia lifts curtain on AI software dev platform, new AI servers at Computex

Nvidia on Monday announced several AI computing initiatives for enterprise AI product development and operation, including unveiling the company’s cloud-hosted Base Command AI software development platform with NetApp and dozens of new x86 servers from leading OEMs that are certified to run Nvidia AI Enterprise software.

The Santa Clara, California-based chipmaker is talking up AI at Computex 2021, which is returning to Taiwan as a hybrid in-person and virtual event this week after the cancellation of 2020’s expo due to the COVID-19 pandemic. Nvidia and the National Energy Research Scientific Computing Center (NERSC) last week flipped the “on” switch for Perlmutter, billed as the world’s fastest supercomputer for AI workloads.

The new Base Command platform gives developers access to the cloud-hosted computing power of Nvidia DGX SuperPOD AI supercomputers and NetApp data management tools. It is available to early-access customers through a $90,000 monthly subscription, Nvidia said in a press briefing last week.

Nvidia touted Base Command as a way for enterprise developers to “quickly move their AI projects from prototypes to production” with software designed “for large-scale, multi-user and multi-team AI development workflows hosted either on-premises or in the cloud,” the company said. The software development platform enables numerous researchers and data scientists to “simultaneously work on accelerated computing resources, helping enterprises maximize the productivity of both their expert developers and their valuable AI infrastructure,” the company said.

Base Command offers a single-pane-of-glass interface for viewing AI software development on integrated monitoring and reporting dashboards, with command-line APIs also available. Among the AI developer tools included are Nvidia's NGC catalog of AI and analytics software, APIs for integrating with MLOps software, and Jupyter notebooks, the chipmaker said.

New x86 servers certified for Nvidia AI Enterprise workloads

Nvidia also announced the availability of new AI-optimized servers from major computer manufacturers as part of its Nvidia-certified systems program. These new systems are certified to run Nvidia AI Enterprise software and are either available now or coming later this year from OEMs. Participating companies include Advantech, Altos, ASRock Rack, Asus, Dell Technologies, Gigabyte, Hewlett Packard Enterprise, Lenovo, Nettrix, QCT, and Supermicro.

New x86 servers based on Nvidia Ampere architecture GPUs are available now, the company said. Nvidia-certified systems using BlueField-2 DPUs will come out later this year and non-x86 machines powered by Arm CPUs will arrive in 2022.

“Enterprises across every industry need to support their innovative work in AI on traditional datacenter infrastructure,” Nvidia head of enterprise computing Manuvir Das said in a statement. “The open, growing ecosystem of Nvidia-certified systems provides unprecedented customer choice in servers validated by Nvidia to power world-class AI.”

Das said the new servers will become “some of the highest-volume x86 servers used in mainstream datacenters, bringing the power of AI to a wide range of industries, including health care, manufacturing, retail, and financial services.”

The new Nvidia-certified systems are expected to run software such as the Nvidia AI Enterprise suite of AI and data analytics software on VMware vSphere, Nvidia Omniverse Enterprise for design collaboration and advanced simulation, and Red Hat OpenShift for AI development, with additional support for Cloudera data engineering and machine learning modeling tools, Das said.



Activision Says Crash Bandicoot Dev Still Working On Series

Crash Bandicoot developer Toys for Bob is now assisting Raven Software with the development of Call of Duty: Warzone, as first revealed on the team’s Twitter account. While former Toys for Bob employees indicated that layoffs had occurred behind the scenes, Call of Duty publisher Activision, which owns Toys for Bob, now disputes those reports and notes that the studio is still working on the Crash Bandicoot franchise.

“There has not been a reduction in personnel recently at the studio,” said Activision in a statement to Digital Trends. “The development team is operating fully and has a number of full-time job openings at this time. The studio is excited to continue supporting Crash Bandicoot 4: It’s About Time, and more recently, provide additional development support to Call of Duty: Warzone.”

Toys for Bob is best known for Crash Bandicoot N. Sane Trilogy and Crash Bandicoot 4: It’s About Time. The developer confirmed that it’s now working in a support role on Activision’s popular battle royale game.

"Toys for Bob is proud to support development for Season 3 of Call of Duty: Warzone, and look forward to more to come," Toys for Bob tweeted.

The news comes after a shaky year for Call of Duty: Warzone. The game was in rough shape following its integration of Black Ops Cold War last year. This integration combined both games’ progression systems and weapons, but came with a litany of bugs and balancing issues. Since then, the community has been vocal about the myriad of problems the game faced, and it was clear Raven didn’t have the bandwidth to address everything in a timely manner. Warzone remained in this shape from December 2020 until April 2021 — a long time for one of the leading battle royale games.

Video Game Chronicle reports that “virtually every studio at Activision is now working on Call of Duty.”

According to former Toys for Bob team member Nicholas Kole (as originally reported by VGC), this shift has supposedly come with layoffs. "Everyone I interfaced with and worked along was let go," Kole said. "I'm very glad it's not a [total] shuttering," the former staff member added in response to the news of Toys for Bob's announcement.

Same! Altho everyone I interfaced with and worked along was let go, I’m very glad it’s not a totally shuttering

— Nicholas Kole (@FromHappyRock) April 30, 2021

Another former employee simply tweeted, “I no longer work at Toys for Bob,” with a crying emoji.

I no longer work at Toys for Bob… https://t.co/58NbuvSbZz

— Blake the Non-Binary Robot (@nonbinary_robot) April 29, 2021

Despite the posts, Activision now says that no recent layoffs have occurred at the studio.

Still, the news is notable considering Crash 4's underperformance compared to recent games in the series. According to SuperData (via GamingBolt), Crash Bandicoot 4: It's About Time sold only 402,000 digital units within its first month on the market, noticeably lower than Crash Bandicoot N. Sane Trilogy, which sold 520,000 digital units in a single day.

In the United States, Crash 4 peaked at the 11th spot on the NPD sales charts, while the N. Sane Trilogy debuted as the fourth-bestselling game of the month when it launched in June 2017.

This isn’t the first time Activision has shifted a studio into a support role. Following the launch of the Tony Hawk’s Pro Skater 1+2 remake, developer Vicarious Visions was moved to a support role by Activision. Vicarious Visions will no longer create original games going forward and will instead provide development help on Blizzard Entertainment titles. Fans worry this could be the fate of Toys for Bob, as well.
