Valve’s latest monthly report might give a clue that the PC industry is recovering from the component shortage that has dragged on for over two years.
In its monthly user survey, Valve’s data shows that gamers accessing Steam were doing so from PCs running a higher number of Nvidia Ampere graphics cards. The RTX 3080 graphics card saw a 0.24% increase in usage during the month of May, while the RTX 3070 GPU saw a 0.19% increase.
Of course, cheaper and more readily accessible cards still remain the most common, but the growth is certainly encouraging to see. It coincides with recent drops in GPU pricing and increases in supply.
AMD components have also made an appearance on the survey, with the Radeon RX 6800 XT increasing 0.15%; Nvidia’s RTX 3060, meanwhile, rose 0.18%.
These stats are pivotal as they hint that more and more gamers are getting access to these high-end GPUs, both as desktop parts and in gaming laptops, which have been scarce in recent months.
The survey also shows AMD gaining some market share against consumer favorite Intel, with a 1.24% increase. However, its competitor remains in a staunch lead with an overall 67.19% of users.
On the CPU side, the survey also uncovered that processors on the PCs used on Steam have steadily increased from four cores to six cores over the last five years, as noted by PCWorld.
Previously, a four-core CPU was the most common, but this May survey continues to show growth in higher core count processors. Approximately 33% of PCs on Steam were running four-core CPUs, while over 50% of PCs were running six-core systems.
Both Intel and AMD have introduced CPUs with more than six cores, bringing eight-, 12-, and 16-core parts to their mid-range component lines. Still, six seems to be the new standard for the average PC gamer, especially now that Intel has increased its CPUs’ core counts with the 12th-gen Alder Lake chips.
Lastly, Valve’s survey continues to show increases in Windows 11. The update is just 0.41% short of being installed on one in every five PCs surveyed. Additionally, over 50% of those surveyed had 16GB of RAM.
Xbox chief Phil Spencer has given the strongest hint yet that Elder Scrolls VI won’t be coming to PlayStation. Since Microsoft bought Bethesda Softworks owner ZeniMax Media earlier this year, questions have been swirling around platform exclusivity for ongoing franchises, such as Elder Scrolls, Doom and Fallout.
Spencer previously noted exclusivity would be decided on a “case-by-case basis.” He later said Microsoft would “continue to invest in communities of players” and that “there might be things that have either contractual things or legacy on different platforms that we’ll go do.” Still, he was adamant that the ZeniMax/Bethesda deal was largely about “delivering great exclusive games.”
Now, in an interview with GQ to mark the 20th anniversary of Xbox, Spencer strongly hinted that, like next year’s Starfield, Elder Scrolls VI will be locked to Xbox consoles, PC and Xbox Cloud Gaming. “It’s not about punishing any other platform, like I fundamentally believe all of the platforms can continue to grow,” he said. “But in order to be on Xbox, I want us to be able to bring the full complete package of what we have. And that would be true when I think about Elder Scrolls VI. That would be true when I think about any of our franchises.”
That won’t exactly inspire confidence among, say, Skyrim fans that they’ll get to play Elder Scrolls VI on PlayStation and Switch. In any case, with Bethesda’s focus on Starfield, it might be a while before it divulges more details about the next Elder Scrolls game.
All products recommended by Engadget are selected by our editorial team, independent of our parent company. Some of our stories include affiliate links. If you buy something through one of these links, we may earn an affiliate commission.
Epic Games has sent out another survey to Fortnite players to evaluate their interest in potential future crossovers, shedding light on some of the IP that may be in the pipeline for consideration. This isn’t the first time we’ve seen a survey like this, but it’s hard to guess what comes next.
Epic regularly surveys players about various things, including their experiences in recent matches and their perspectives on various firearms and items. Sometimes, these surveys instead focus on crossovers. We’ve seen some past potential crossovers become actual in-game characters, skins, and more.
Yesterday, Epic sent me a survey in my emails asking for my opinion on many characters/people/franchises.
I’ll just post some of the most interesting ones here!
Remember that Guggimon (Season 7 BP) was also in one of these surveys before he came into the game! pic.twitter.com/4il4WJJMiL
As with previous examples, the new survey offers a huge number of potential crossovers covering popular IP across television, video games, and cartoons, as well as celebrities and athletes. It’s hard to tell which listed items are actually in consideration and which may be included to simply obscure the actual potential crossovers — if that’s the case, of course.
Listed items include popular games like Among Us, Yoshi’s Island, Final Fantasy, Clash of Clans, and Uncharted. These are joined by movies like Lord of the Rings, Nightmare on Elm Street, Scream, The Matrix, and Independence Day.
Some of the items on the list have already appeared in the game — it includes Predator, for example, which was already a crossover. Iron Man is also listed as a potential movie crossover and the character has likewise already had a Fortnite crossover. We may never see any of these listed items become Fortnite crossovers, but the survey does indicate that an end to these tie-ins is nowhere in sight.
This week the folks at Epic updated Fortnite to version 17.20, complete with Preferred Item Slots. If you’re looking to use Preferred Item Slots, open settings, tap the Game tab, and bang! There it is. It is recommended that you take a peek at this feature before you drop in to your next game as it is ON BY DEFAULT.
For more information on this system, drop in to our recent peek at item slots for Fortnite. This might change the way you play the game. Not much else changed in this newest update in the LIVE game, but several bits leaked in a big way. One example is the Grab-Itron weapon, which works like a UFO tractor beam, allowing you to pull in objects, pick them up, and toss them.
As shown above in a demo by Makks, the versatility of the Grab-Itron weapon is impressive. Both the size of the thrown object and the material it’s made of affect the damage dealt by the toss!
Now, if only the weapon would allow the user to collect more stuff and roll all of said stuff into a ball. Imagine a Fortnite where we’re living like Katamari Damacy, picking up items and making larger and larger balls, smashing everything that gets in the way of said balls. Wouldn’t that be neat? Or would that be a different game entirely? In any case, the Grab-Itron is in game files now, and will likely be launched in Fortnite in the imminent future.
There’ll also be a new in-game event in Fortnite, coming (hopefully) pretty soon. As harvested by iFireMonkey, new “Special Event Countdown” imagery appeared in the game’s most recent update files. You’ll see one of those new graphics in the hero image in the article you’re reading now! Cross your fingers for an event that’s both wild and exciting!
A Microsoft support page is hinting at the end of life for Windows 10 Professional and Home editions, just a week ahead of the company’s “what’s next for Windows” event. The page lists the “retirement” as October 14, 2025, which is a little over 10 years after the operating system was first released.
Though many believe this could mark the end of Windows 10, this support page has been around since 2015, according to some users on Reddit. However, Microsoft watcher Paul Thurrott notes that the page has been edited, marking one of the first times Microsoft has specifically mentioned the end of support for Windows 10.
With Windows 10 being a “service” and getting twice-a-year updates, all previous support pages only mentioned when a specific Windows 10 version would leave support. Those include versions like the Windows 10 Creators Update, the Windows 10 Fall Creators Update, and older releases of the Windows operating system.
The 10-year support period hinted at on the support page should not be too surprising, though, especially if a new Windows version is on the way as rumors indicate. Microsoft is known to end support for an operating system after such a time frame and then give extended support to users who pay for it.
Windows XP was supported for well over 10 years, from August 2001 all the way through April 2014. Windows 7, meanwhile, was supported from October 2009 all the way through the end of January 2020. It won’t be too surprising if Windows 10 lasts just as long, going from July 2015 all the way to October 2025 before setting the stage for what could be Windows 11.
Along with this support page, there are plenty of indications that Windows 11 might indeed be the real follow-up to Windows 10. Microsoft has dropped hints, one of which can be seen in the reflection of a window in a photo featured in the media invites for the June 24 Windows event. Another is an 11-minute-long YouTube video that remixes all the Windows startup sounds. Even Microsoft’s executives have been teasing “next-generation Windows” at previous Microsoft media events, including both Surface chief Panos Panay and Microsoft CEO Satya Nadella himself.
One of the biggest highlights of Build, Microsoft’s annual software development conference, was the presentation of a tool that uses deep learning to generate source code for office applications. The tool uses GPT-3, a massive language model developed by OpenAI last year and made available to select developers, researchers, and startups in a paid application programming interface.
Many have touted GPT-3 as the next-generation artificial intelligence technology that will usher in a new breed of applications and startups. Since GPT-3’s release, many developers have found interesting and innovative uses for the language model. And several startups have declared that they will be using GPT-3 to build new or augment existing products. But creating a profitable and sustainable business around GPT-3 remains a challenge.
Microsoft’s first GPT-3-powered product provides important hints about the business of large language models and the future of the tech giant’s deepening relationship with OpenAI.
A few-shot learning model that must be fine-tuned?
According to the Microsoft Blog, “For instance, the new AI-powered features will allow an employee building an e-commerce app to describe a programming goal using conversational language like ‘find products where the name starts with “kids.”’ A fine-tuned GPT-3 model [emphasis mine] then offers choices for transforming the command into a Microsoft Power Fx formula, the open source programming language of the Power Platform.”
I didn’t find technical details on the fine-tuned version of GPT-3 Microsoft used. But there are generally two reasons you would fine-tune a deep learning model. In the first case, the model doesn’t perform the target task with the desired precision, so you need to fine-tune it by training it on examples for that specific task.
In the second case, your model can perform the intended task, but it is computationally inefficient. GPT-3 is a very large deep learning model with 175 billion parameters, and the costs of running it are huge. Therefore, a smaller version of the model can be optimized to perform the code-generation task with the same accuracy at a fraction of the computational cost. A possible tradeoff will be that the model will perform poorly on other tasks (such as question-answering). But in Microsoft’s case, the penalty will be irrelevant.
In either case, a fine-tuned version of the deep learning model seems to be at odds with the original idea discussed in the GPT-3 paper, aptly titled, “Language Models are Few-Shot Learners.”
Here’s a quote from the paper’s abstract: “Here we show that scaling up language models greatly improves task-agnostic, few-shot performance, sometimes even reaching competitiveness with prior state-of-the-art fine-tuning approaches.” This basically means that, if you build a large enough language model, you will be able to perform many tasks without the need to reconfigure or modify your neural network.
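To make that contrast concrete, here is a minimal sketch of few-shot use: worked examples are simply prepended to the query at inference time, and the model’s weights never change, whereas fine-tuning would instead retrain the weights on such pairs. The example pairs and helper below are hypothetical illustrations, not OpenAI’s or Microsoft’s actual prompts.

```python
# Few-shot prompting, sketched: prepend (input, output) example pairs to
# the query and let the unmodified model continue the pattern. These
# example pairs are invented for illustration only.

EXAMPLES = [
    ("find products where the name starts with 'kids'",
     'Filter(Products, StartsWith(Name, "kids"))'),
    ("show accounts created this year",
     "Filter(Accounts, Year(CreatedDate) = Year(Today()))"),
]

def build_few_shot_prompt(query: str) -> str:
    """Assemble a few-shot prompt from the example pairs plus the query."""
    lines = []
    for nl, formula in EXAMPLES:
        lines.append(f"English: {nl}\nPower Fx: {formula}")
    # The model is expected to complete the final "Power Fx:" line.
    lines.append(f"English: {query}\nPower Fx:")
    return "\n\n".join(lines)

prompt = build_few_shot_prompt("find orders where the total exceeds 100")
print(prompt)
```

No gradient updates happen anywhere in this sketch; that is precisely the “task-agnostic, few-shot” property the paper describes, and precisely what fine-tuning gives up in exchange for accuracy or efficiency on one narrow task.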
So, what’s the point of the few-shot machine learning model that must be fine-tuned for new tasks? This is where the worlds of scientific research and applied AI collide.
Academic research vs commercial AI
There’s a clear line between academic research and commercial product development. In academic AI research, the goal is to push the boundaries of science. This is exactly what GPT-3 did. OpenAI’s researchers showed that with enough parameters and training data, a single deep learning model could perform several tasks without the need for retraining. And they have tested the model on several popular natural language processing benchmarks.
But in commercial product development, you’re not running against benchmarks such as GLUE and SQuAD. You must solve a specific problem, solve it ten times better than the incumbents, and be able to run it at scale and in a cost-effective manner.
Therefore, if you have a large and expensive deep learning model that can perform ten different tasks at 90 percent accuracy, it’s a great scientific achievement. But when there are already ten lighter neural networks that perform each of those tasks at 99 percent accuracy and a fraction of the price, then your jack-of-all-trades model will not be able to compete in a profit-driven market.
Here’s an interesting quote from Microsoft’s blog that confirms the challenges of applying GPT-3 to real business problems: “This discovery of GPT-3’s vast capabilities exploded the boundaries of what’s possible in natural language learning, said Eric Boyd, Microsoft corporate vice president for Azure AI. But there were still open questions about whether such a large and complex model could be deployed cost-effectively at scale to meet real-world business needs [emphasis mine].”
And those questions were answered with the optimization of the model for that specific task. Since Microsoft wanted to solve a very specific problem, the full GPT-3 model would be overkill, wasting expensive resources.
Therefore, the plain vanilla GPT-3 is more of a scientific achievement than a reliable platform for product development. But with the right resources and configuration, it can become a valuable tool for market differentiation, which is what Microsoft is doing.
In an ideal world, OpenAI would have released its own products and generated revenue to fund its own research. But the truth is, developing a profitable product is much more difficult than releasing a paid API service, even if your company’s CEO is Sam Altman, the former President of Y Combinator and a product development legend.
And this is why OpenAI enlisted the help of Microsoft, a decision that will have long-term implications for the AI research lab. In July 2019, Microsoft made a $1 billion investment in OpenAI—with some strings attached.
From the OpenAI blog post that declared the Microsoft investment: “OpenAI is producing a sequence of increasingly powerful AI technologies, which requires a lot of capital for computational power. The most obvious way to cover costs is to build a product, but that would mean changing our focus [emphasis mine]. Instead, we intend to license some of our pre-AGI technologies, with Microsoft becoming our preferred partner for commercializing them.”
Alone, OpenAI would have a hard time finding a way to enter an existing market or create a new market for GPT-3.
On the other hand, Microsoft already has the pieces required to shortcut OpenAI’s path to profitability. Microsoft owns Azure, the second-largest cloud infrastructure, and it is in a suitable position to subsidize the costs of training and running OpenAI’s deep learning models.
But more importantly—and this is why I think OpenAI chose Microsoft over Amazon—is Microsoft’s reach across different industries. Thousands of organizations and millions of users are using Microsoft’s paid applications such as Office, Teams, Dynamics, and Power Apps. These applications provide perfect platforms to integrate GPT-3.
Microsoft’s market advantage is fully evident in its first application for GPT-3. It is a very simple use case targeted at a non-technical audience. It’s not supposed to do complicated programming logic. It just converts natural language queries into data formulas in Power Fx.
This trivial application is irrelevant to most seasoned developers, who will find it much easier to directly type their queries than describe them in prose. But Microsoft has plenty of customers in non-tech industries, and its Power Apps are built for users who don’t have any coding experience or are learning to code. For them, GPT-3 can make a huge difference and help lower the barrier to developing simple applications that solve business problems.
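For a sense of how narrow this use case is, a toy rule-based stand-in for the fine-tuned model might look like the following. The regex pattern and table-name handling are illustrative assumptions; the real product uses a language model, not hand-written rules. `Filter` and `StartsWith` are genuine Power Fx functions.

```python
# A toy, rule-based stand-in for the natural-language-to-Power-Fx step:
# map one narrow class of English queries onto a Filter formula. This is
# an illustration of the use case, not Microsoft's actual approach.
import re
from typing import Optional

PATTERN = re.compile(
    r"find (?P<table>\w+) where the (?P<column>\w+) starts with "
    r"[\"'](?P<prefix>[^\"']+)[\"']",
    re.IGNORECASE,
)

def to_power_fx(query: str) -> Optional[str]:
    """Return a Power Fx formula for a 'starts with' query, else None."""
    m = PATTERN.search(query)
    if not m:
        return None
    table = m.group("table").capitalize()
    column = m.group("column").capitalize()
    return f'Filter({table}, StartsWith({column}, "{m.group("prefix")}"))'

print(to_power_fx('find products where the name starts with "kids"'))
# Filter(Products, StartsWith(Name, "kids"))
```

The point of the sketch is the scope: a seasoned developer would type the formula directly, but a citizen developer gets real value from exactly this kind of narrow translation.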
Microsoft has another factor working to its advantage. It has secured exclusive access to the code and architecture of GPT-3. While other companies can only interact with GPT-3 through the paid API, Microsoft can customize it and integrate it directly into its applications to make it efficient and scalable.
By making the GPT-3 API available to startups and developers, OpenAI created an environment to discover all sorts of applications with large language models. Meanwhile, Microsoft was sitting back, observing all the different experiments with growing interest.
The GPT-3 API basically served as a product research project for Microsoft. Whatever use case any company finds for GPT-3, Microsoft will be able to do it faster, cheaper, and with better accuracy thanks to its exclusive access to the language model. This gives Microsoft a unique advantage to dominate most markets that take shape around GPT-3. And this is why I think most companies that are building products on top of the GPT-3 API are doomed to fail.
The OpenAI Startup Fund
And now, Microsoft and OpenAI are taking their partnership to the next level. At the Build Conference, Altman declared a $100 million fund, the OpenAI Startup Fund, through which it will invest in early-stage AI companies.
“We plan to make big early bets on a relatively small number of companies, probably not more than 10,” Altman said in a prerecorded video played at the conference.
What kind of companies will the fund invest in? “We’re looking for startups in fields where AI can have the most profound positive impact, like healthcare, climate change, and education,” Altman said, to which he added, “We’re also excited about markets where AI can drive big leaps in productivity like personal assistance and semantic search.” The first part seems to be in line with OpenAI’s mission to use AI for the betterment of humanity. But the second part seems to be the type of profit-generating applications that Microsoft is exploring.
Also from the fund’s page: “The fund is managed by OpenAI, with investment from Microsoft and other OpenAI partners. In addition to capital, companies in the OpenAI Startup Fund will get early access to future OpenAI systems, support from our team, and credits on Azure.”
So, basically, it seems like OpenAI is becoming a marketing proxy for Microsoft’s Azure cloud and will help spot AI startups that might qualify for acquisition by Microsoft in the future. This will deepen OpenAI’s partnership with Microsoft and make sure the lab continues to get funding from the tech giant. But it will also take OpenAI a step closer toward becoming a commercial entity and eventually a subsidiary of Microsoft. How this will affect the research lab’s long-term goal of scientific research on artificial general intelligence remains an open question.
Is it just me or does anyone else feel OpenAI looks more and more like a Microsoft subsidiary? #MSBuild
This article was originally published by Ben Dickson on TechTalks, a publication that examines trends in technology, how they affect the way we live and do business, and the problems they solve. But we also discuss the evil side of technology, the darker implications of new tech, and what we need to look out for. You can read the original article here.
It looks like Google is preparing to roll out a couple of new Fitbit features, including the ability to detect snoring while you sleep. The functionality was found within the latest Fitbit app update for Android, though it isn’t yet available to users. As well, the code revealed an upcoming noise detection feature that listens for ambient noises while you sleep.
The latest version of the Fitbit app started rolling out to users on Android yesterday. The folks at 9to5Google did an APK teardown and found evidence of an unannounced feature called Snore & Noise Detect. With that, a Fitbit wearable will be able to monitor nighttime noises using its built-in microphone, using it to detect noises and snoring.
The feature description explains that Fitbit will find a baseline noise level at night, then use an algorithm to detect whether louder noises are from snoring or something else. Users will see their (or their partner’s) snoring frequency listed in one of three categories: none to mild, moderate, and frequent. Those who are ‘frequent’ snorers snore for 40 percent or more of their time spent asleep.
The microphone will also provide a label for the general noise level in your sleeping environment — including snoring — with the range going from ‘very quiet’ to ‘very loud.’ Based on the code found in the app, users will be advised to charge their wearable before going to bed if they plan to use the sleep noise monitoring feature, due to its battery demands.
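Based only on the description above, the categorization logic might be sketched roughly as follows. The 40 percent ‘frequent’ cutoff comes from the app strings; the ‘moderate’ cutoff, the baseline computation, and the decibel margin are assumptions for illustration, not Fitbit’s actual algorithm.

```python
# Rough sketch of the snore categorization described above. The 40 percent
# "frequent" threshold is from the teardown; the 10 percent "moderate"
# cutoff, median baseline, and +6 dB margin are illustrative assumptions.
from statistics import median

def snore_category(snoring_minutes: float, sleep_minutes: float) -> str:
    """Bucket snoring as a share of total time asleep."""
    pct = 100 * snoring_minutes / sleep_minutes
    if pct >= 40:
        return "frequent"
    if pct >= 10:  # assumed cutoff, not from the teardown
        return "moderate"
    return "none to mild"

def louder_than_baseline(samples_db: list, event_db: float) -> bool:
    """Flag a sound event that rises above the night's baseline level."""
    baseline = median(samples_db)
    return event_db > baseline + 6  # assumed margin

print(snore_category(200, 480))  # frequent (about 42 percent of sleep)
print(louder_than_baseline([30, 31, 29, 32, 30], 45))  # True
```

Whatever the real thresholds are, the shape of the logic — establish a nightly baseline, flag louder events, classify them, and report a percentage bucket — matches what the app strings describe.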
The microphone noise monitoring will start once the wearable determines that the user has fallen asleep. 9to5Google also found evidence that Google is working on a ‘sleep animal’ feature for Fitbit, one that will possibly include things like a tortoise, dolphin, kangaroo, bear, hummingbird, and giraffe. Users may also soon get sleeping profile designations like ‘short sleeper’ or ‘restless sleeper,’ but when these features may arrive remains unclear.
Samsung arguably has the foldable phone market cornered, but it still has a lot of work to do to polish both hardware and software. On the one hand, it still has to make those devices more reliable and resilient to wear and tear, especially the fragile flexible screen. On the other hand, it has yet to take full advantage of the form factors that these devices open up. A new leak suggests that Samsung is working on an adaptive user interface dubbed “Split UI,” and it might not be just for foldables but also for rollables.
Foldable phones that turn into tablets aren’t just phones or tablets. They introduce a new form factor that can have its own interaction language, one that revolves around split-screen interfaces. While Android does have native support for running two apps side by side, it isn’t yet designed to take full advantage of these new form factors, though it is indeed heading in that direction.
In the meantime, however, Samsung might also be preparing its own solution, one that famed tipster @Ice universe has revealed to be called “Split UI”. Rather than being specifically designed for foldables, however, this adaptive UI seems to cater to any and all form factors that Samsung’s devices sport, from normal “candy bar” to tablet to foldable.
Curiously, this is also giving rise to speculation about Samsung’s teased rollable phone plans. Without taking the video clips literally, it does show how a Split UI will adapt when a phone’s or tablet’s screen is extended, just like on a rollable device.
The clips also show how the UI can even be customized by sliding the separator between different panes of the same app. Samsung has admittedly been quite good at making UI experiments like this, even before Android officially adopted them, and it will be interesting to actually see it in action, perhaps in the Galaxy Z Fold 3 coming in August.
Spotify, once a platform where you could only stream music, has made a big push into the podcast market, including the relatively recent launch of its own originals and exclusive shows. We’ve heard hints over the past several months that the company may be planning to do something similar with audiobooks, and a newly announced partnership seems to reinforce that idea.
Audiobooks, much like podcasts, have seen a surge in popularity as people look for ways to stay informed and entertained while away from home. Audible is arguably the largest source of audiobooks, but many other platforms are also available, one of which is Storytel.
Spotify has announced a new partnership with Storytel that will allow its users to link their audiobook account with the Spotify app, removing the need to use two different apps to access their favorite music and audiobooks. The integration is made possible by Spotify’s recently announced Open Access Platform, and it’ll be available to Storytel users later this year.
Storytel is available in 25 countries and offers access to a huge library of audiobook content. That content will remain on the Storytel platform, mind — the big deal here is that by linking the two accounts, subscribers will be able to access that audiobook content within the Spotify app.
Though this isn’t a big deal for people who don’t use Storytel, it does hint at potential future deals that may bring more audiobooks and audiobook platforms to Spotify. The audio streaming company has itself dabbled with audiobook content, namely with the release of nine public domain audiobooks back in January.
The latest APK of the game Pokemon GO revealed information about the near-future of the game, including the possibility of a launch of Larvesta and Volcarona! A teardown of the game’s version 0.209.0 revealed new move effects, a fix for Rocket Radar, a Global Challenge text change, changes to image assets, Shadow Pokemon effect color changes, and changes to notifications for contacts joining the game. This update also added Pokemon Party changes and Quest Branching as a new sort of quest type.
Courtesy of PokeMiners on The Silph Road, we can see how this newest version of Pokemon GO has codes relating to Raid Passes. It would appear that if a user has a missing Remote Raid Pass – somehow – telemetry is now sent from the in-game shop to the user (and/or vice-versa). This will likely fix any issues users had with missing passes in the past.
This update adds Tags instructions for the average everyday user. This should make the use of Tags for Pokemon a whole lot more evident and simple – especially for those who’ve never used the system before.
Newly added code references an “alternative Battery Saver Service.” It’s possible this is only part of a test, but it might appear as a new option once the code goes live for all users.
This update added a modified Rocket Radar image asset that should fix the Rocket Radar experience. The white squares… situation… should be fixed when this update goes live.
Party suggestions will change with this update thanks to a few code changes for Gym and Raid Battle Party Size, Max Party Name Lengths, and “Party Page Title.”
A slight change to the text shown for the most recent Global Challenge removes the note about how the reward will activate. Another slight update allows the game to change the color of Shadow Pokemon effects.
Code referencing quest types now includes Quest Branching. There’s a “GetQuestDescription” and an “invalid branch” as well as “Branching_Quest” in the mix. Also appearing is a simple “contact_signed_up” – which could get annoying.
Larvesta and Volcarona
If you’re looking for Larvesta and Volcarona, now’s the time to get a little excited. Included in the code for this latest update to Pokemon GO are two new moves (FX only): Fairy Wind Fast and Fiery Dance. If we follow what’s presented in the main series Pokemon games, the move Fiery Dance can ONLY be learned by the Pokemon Volcarona!
Volcarona is the most evolved form of Larvesta, a Pokemon first introduced in Generation V. It’s entirely possible – not verified, but possible – that this Bug/Fire-type Pokemon will be revealed with an event for the summer solstice this June!