Portal 3 may never happen, but at least we’ve got a new way to experience the original teleporting puzzle shooter. Today during his GTC keynote, NVIDIA CEO Jensen Huang announced Portal with RTX, a mod that adds support for real-time ray tracing and DLSS 3. Judging from the short trailer, it looks like the Portal we all know and love, except now the lighting around portals bleeds into their surroundings, and just about every surface is deliciously reflective.
Similar to what we saw with Minecraft RTX, Portal’s ray tracing mod adds a tremendous amount of depth to a very familiar game. And thanks to DLSS 3, the latest version of NVIDIA’s super sampling technology, it also performs smoothly with plenty of RTX bells and whistles turned on. This footage likely came from the obscenely powerful RTX 4090, but it’ll be interesting to see how well Portal with RTX performs on NVIDIA’s older 2000-series cards. Current Portal owners will be able to play the RTX mod in November.
Huang says the company developed the RTX mod inside of its Omniverse environment. To take that concept further, NVIDIA is also launching RTX Remix, an application that will let you capture existing game scenes and tweak their objects and environments with high resolution textures and realistic lighting. The company’s AI tools can automatically give materials “physically accurate” properties—a ceiling in Morrowind, for example, becomes reflective after going through RTX Remix. You’ll be able to export remixed scenes as mods, and other players will be able to play them through the RTX renderer.
All products recommended by Engadget are selected by our editorial team, independent of our parent company. Some of our stories include affiliate links. If you buy something through one of these links, we may earn an affiliate commission. All prices are correct at the time of publishing.
Over the last two years, one of the most common ways for organizations to scale and run increasingly large and complex artificial intelligence (AI) workloads has been with the open-source Ray framework, used by companies from OpenAI to Shopify and Instacart.
Ray enables machine learning (ML) models to scale across hardware resources and can also be used to support MLops workflows across different ML tools. Ray 1.0 came out in September 2020 and has had a series of iterations over the last two years.
Today, the next major milestone was released, with the general availability of Ray 2.0 at the Ray Summit in San Francisco. Ray 2.0 extends the technology with the new Ray AI Runtime (AIR) that is intended to work as a runtime layer for executing ML services. Ray 2.0 also includes capabilities designed to help simplify building and managing AI workloads.
Alongside the new release, Anyscale, which is the lead commercial backer of Ray, announced a new enterprise platform for running Ray. Anyscale also announced a new $99 million round of funding co-led by existing investors Addition and Intel Capital with participation from Foundation Capital.
“Ray started as a small project at UC Berkeley and it has grown far beyond what we imagined at the outset,” said Robert Nishihara, cofounder and CEO at Anyscale, during his keynote at the Ray Summit.
OpenAI’s GPT-3 was trained on Ray
It’s hard to overstate the foundational importance and reach of Ray in the AI space today.
Nishihara went through a laundry list of big names in the IT industry that are using Ray during his keynote. Among the companies he mentioned is ecommerce platform vendor Shopify, which uses Ray to help scale its ML platform that makes use of TensorFlow and PyTorch. Grocery delivery service Instacart is another Ray user, benefitting from the technology to help train thousands of ML models. Nishihara noted that Amazon is also a Ray user across multiple types of workloads.
Ray is also a foundational element for OpenAI, which is one of the leading AI innovators, and is the group behind the GPT-3 Large Language Model and DALL-E image generation technology.
“We’re using Ray to train our largest models,” Greg Brockman, CTO and cofounder of OpenAI, said at the Ray Summit. “So, it has been very helpful for us in terms of just being able to scale up to a pretty unprecedented scale.”
Brockman commented that he sees Ray as a developer-friendly tool and the fact that it is a third-party tool that OpenAI doesn’t have to maintain is helpful, too.
“When something goes wrong, we can complain on GitHub and get an engineer to go work on it, so it reduces some of the burden of building and maintaining infrastructure,” Brockman said.
More machine learning goodness comes built into Ray 2.0
For Ray 2.0, a primary goal for Nishihara was to make it simpler for more users to be able to benefit from the technology, while providing performance optimizations that benefit users big and small.
Nishihara commented that a common pain point in AI is that organizations can get tied into a particular framework for a certain workload, but realize over time they also want to use other frameworks. For example, an organization might start out just using TensorFlow, but realize they also want to use PyTorch and HuggingFace in the same ML workload. With the Ray AI Runtime (AIR) in Ray 2.0, it will now be easier for users to unify ML workloads across multiple tools.
Model deployment is another common pain point that Ray 2.0 is looking to help solve, with the Ray Serve deployment graph capability.
“It’s one thing to deploy a handful of machine learning models. It’s another thing entirely to deploy several hundred machine learning models, especially when those models may depend on each other and have different dependencies,” Nishihara said. “As part of Ray 2.0, we’re announcing Ray Serve deployment graphs, which solve this problem and provide a simple Python interface for scalable model composition.”
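To illustrate the kind of model composition Nishihara describes, here is a plain-Python analogue of the pattern — a small graph in which one step feeds several downstream models whose outputs are combined. This is a sketch of the concept only, not the actual Ray Serve deployment graph API, and all of the model names are hypothetical stand-ins:

```python
# Hypothetical pipeline: preprocess once, fan out to two "models,"
# then combine their results -- the composition pattern that
# deployment graphs are meant to express, sketched without Ray itself.

def preprocess(text):
    return text.lower().split()

def sentiment_model(tokens):
    # stand-in for a real sentiment classifier
    return "positive" if "great" in tokens else "neutral"

def topic_model(tokens):
    # stand-in for a real topic classifier
    return "graphics" if "gpu" in tokens else "general"

def combine(sentiment, topic):
    return {"sentiment": sentiment, "topic": topic}

def pipeline(text):
    tokens = preprocess(text)
    return combine(sentiment_model(tokens), topic_model(tokens))

print(pipeline("This GPU is great"))
# {'sentiment': 'positive', 'topic': 'graphics'}
```

In Ray Serve, each of these steps would be a scalable deployment rather than a local function, with the graph describing how requests flow between them.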
Looking forward, Nishihara’s goal with Ray is to help enable a broader use of AI by making it easier to develop and manage ML workloads.
“We’d like to get to the point where any developer or any organization can succeed with AI and get value from AI,” Nishihara said.
Arm said Tuesday that ray tracing and variable rate shading will migrate from the PC to Arm-powered smartphones and tablets as part of Armv9, the next-generation CPU architecture that the company expects will power the next decade of Arm devices. Chips based upon the v9 architecture will be released in 2021, providing an estimated 30-percent improvement in performance over the next two Arm chip generations and the devices that run them.
Arm’s v9 will also add SVE2, new AI-specific instructions that will probably be used for the AI image processing used on smartphones, such as portrait mode. Arm v9 will also include what Arm is calling Realms, a hardware container of sorts specifically designed to protect virtual machines and secure applications.
As an intellectual-property licensing company, Arm enjoys a unique position in the computing industry. Phones, tablets, and servers never include chips directly made by Arm; instead, companies like Qualcomm, Samsung, Apple, and others sign licensing agreements with Arm, giving them the freedom to manufacture chips designed by Arm, or tweak them to create their own customized designs. Kevin Jou, the chief technology officer of Mediatek—whose chips typically appear in Chromebooks and low-end smartphones—predicted that his company will have an Arm v9 chip by the end of 2021.
Though Arm has been the engine powering smartphones for several years, Apple’s release of and conversion to Arm-powered M1 Macs propelled it into the spotlight—and Apple, presumably, will incorporate the v9 architecture at some point. Arm is also making its way through an involved acquisition process by which it hopes to be purchased by Nvidia, on a timeline that will overlap with the v9 rollout.
Arm’s v9 architecture will intersect with 2021’s “Matterhorn,” the successor to the Cortex-X1/A78 smartphone CPU Arm introduced in 2020, and “Makalu,” the 2022 core that follows Matterhorn. It’s the latter core that will represent the 30-percent increase, Arm said. Arm also releases a new Mali GPU every year, an Arm spokesman said.
Arm chief executive Simon Segars noted that the 30-percent performance improvement estimate was limited to instructions-per-clock improvements. If a Samsung or Apple tweaks the clock speed or the design, performance could further increase.
“What’s great about all of this CPU and system performance, is that it applies equally to notebook performance, as it does to mobile performance,” added Peter Greenhalgh, vice-president of technology at Arm, during a presentation to reporters and analysts.
That’s not a trivial claim. Arm lies at the heart of both the most powerful supercomputer in the world—Japan’s Fugaku, with 7,630,848 Arm A64FX cores—and Cortex-M powered wearables from Sony, Huawei, and others, and everything in between. Arm v9 spans all of that.
What Arm is trying to accomplish in v9 includes the Arm Confidential Compute Architecture, which will include the concept of Realms. Realms will provide better protection to virtual machines, which can include anything from Windows Sandbox on Windows PCs to secure banking applications on smartphones. Though hypervisors protect today’s virtual machines, they still share the same memory space. A Realm would support each virtual machine with some dedicated trusted hardware, though Arm, for now, isn’t really saying how.
Arm’s Segars noted that governments or private companies might offer, or require, digital proof of Covid vaccinations that we can carry around on our smartphones. “It might make sense for us to carry our medical information around with us in digital form, including things like allergy or medication data…but for me to get comfortable with something like that, I’d want advanced encryption running on my device, beyond what’s possible today.”
Richard Grisenthwaite, Arm’s chief architect, said that Realms will protect data even if the phone’s operating system is compromised, and that employers won’t need to issue their own smartphones because of it. Arm is also working with Google to secure the phone’s memory with special memory tagging extensions, he added.
Arm’s claims of overall performance improvements in the 30-percent range over the next two generations may be the closest we get to a sense of how Arm’s v9 cores will improve, at least for now.
Arm said that it had worked with Fujitsu and its Fugaku supercomputer designers to develop a new set of instructions, called Scalable Vector Extension 2 or SVE2, which will be used as a foundation for machine learning and digital-signal processing applications. ML can be used to streamline the processing of on-device commands, as well as virtual reality applications.
Arm said that it will be adding additional AI capabilities to its Mali GPUs and Ethos NPUs over the next few years. AI has been the magic behind “portrait mode” and other computational photography, such as on the Google Pixel lineup. Nvidia, too, will eventually bring its own capabilities to the table.
“Nvidia sees enormous opportunities to bring the transformative powers of AI deeper into gaming, autonomous vehicles, enterprise data centers and embedded devices,” Brian Kelleher, senior vice president of hardware engineering at Nvidia, said in a statement. “Through our ongoing collaboration with Arm, we look forward to using Armv9 to deliver a wide range of once unimaginable computing possibilities.”
Arm didn’t offer many details of when these features will be arriving. Still, just knowing that these technologies are on the way offers an exciting glimpse of the future ahead in smartphones and Arm-powered PCs.
Updated at 11:22 AM with a quote from Nvidia.
Note: When you purchase something after clicking links in our articles, we may earn a small commission. Read our affiliate link policy for more details.
Nvidia has announced the RTX 3050 Ti and RTX 3050, new graphics cards for affordable gaming laptops. These two new graphics cards will launch in laptops starting at $799 and come with support for the latest RTX 30-series features, such as RTX ray tracing and DLSS (deep learning super sampling).
Some of the first laptops to use these new graphics include the Dell XPS 15 and Lenovo Legion 5i.
Nvidia’s goal with these new gaming laptops is to hit over 60 frames per second (fps) in 1080p gaming at Medium settings with both ray tracing and DLSS turned on. They are an update to the GTX 1650 Ti and 1650, Nvidia’s entry-level options for gaming laptops. These 16-series cards were made in place of an RTX 2050, as performance at the time wasn’t high enough to handle RTX effects. While the GTX cards technically support ray tracing, they lacked dedicated hardware for it — unlike the RTX 3050 Ti and 3050.
Samsung initially spoiled the surprise last week when it unveiled the ultraslim Galaxy Book Pro Odyssey and the mysterious RTX 3050 Ti graphics card inside. Now we have the full details on the RTX 3050 Ti and 3050, which Nvidia shared with the press ahead of its announcement.
The RTX 3050 Ti and RTX 3050 are a clear upgrade from the GTX 1650 Ti. Not only is the overall CUDA core count bumped up by 60%, but the addition of RT cores and Tensor cores also means much better ray tracing and DLSS performance. Nvidia didn’t provide information on the base frequency of these two new graphics cards, but did offer a range of boost frequencies.
Based on those same factors, the RTX 3050 Ti and 3050 should also outperform the GTX 1660 Ti. The 4GB of GDDR6 video memory with a 128-bit bus, however, puts the RTX 3050 Ti at a disadvantage to the GTX 1660 Ti. Nvidia didn’t offer any direct performance comparisons against the GTX 1660 Ti.
Instead, Nvidia focused on the comparison between the RTX 3050 Ti and the GTX 1650 Ti, specifically in five games: Call of Duty: Warzone, Outriders, Control (RT), Watch Dogs: Legion (RT), and Minecraft (RTX). Only two of these titles, Call of Duty and Outriders, are what you’d call “fair,” non-ray-tracing comparisons. Nvidia didn’t provide specific frame rates, but there’s around a 15% jump in performance. This is without DLSS turned on, which otherwise helps performance.
Of course, if you want to try out some ray tracing, the RTX 3050 Ti allows you to play games at over 60 fps, so long as DLSS is always turned on. Playing Minecraft with RTX, which uses path tracing, is basically a nonstarter on the GTX 1650 Ti.
I imagine most gamers are more interested in how the RTX 3050 Ti handles lighter esports titles without heavy ray tracing effects, but we won’t know exactly how they perform until we can test the systems ourselves.
The first Nvidia RTX 3050 Ti and 3050 laptops will go on sale May 11.
Nvidia has also announced an update to its Nvidia Studio laptops in the form of new drivers for laptops like the Dell XPS 17, HP ZBook Studio, Asus Zephyrus M16, Lenovo IdeaPad 5i Pro, MSI Creator Z16, and many more. These laptops now have support for RTX 30-series graphics cards and should see a significant benefit to content creation performance.
The Nvidia graphics card launch also lines up with new mobile processors from Intel, the 45-watt H-series chips in the 11th-gen Tiger Lake line. Other gaming laptops such as the Razer Blade 15 received significant updates with these latest processors, as well as new features such as a 1080p webcam.
If you skipped yesterday’s deal on a killer HP gaming laptop, Walmart has an alternative lights-out option for you today. The big box retailer is selling the Dell G5 15 5590 gaming laptop for just $1,149, packed with a GeForce RTX graphics card and Intel’s current high-end gaming processor. That’s $250 under the MSRP and a very good deal for a laptop with these specs.
The biggest attraction to this laptop is the Nvidia GeForce RTX 2060 graphics card. Nvidia’s cutting-edge GPU comes equipped with dedicated hardware for real-time ray tracing, a new graphics feature that greatly enhances games that support it. Ray tracing more accurately mimics the way light behaves in the real world. That may not sound like a big deal, but the result is all-around better visual detail, such as clearer reflections in still water, better shading, and more vivid explosions. Nvidia’s complementary DLSS feature taps dedicated AI cores to reduce the performance impact created by real-time ray tracing.
In general, the RTX 2060 is a great choice for no-compromises gaming on the Dell G5’s 1080p display. Crank up those visual settings and bask in all the sweet, sweet frames. You could also hook the laptop up to a 1440p desktop monitor and get a great gaming experience.
The Dell G5 15 5590 is rocking a 15.6-inch display, a six-core, twelve-thread Core i7-9750H, 16GB of RAM, and a 128GB SSD. The paucity of storage is the biggest downside for this laptop. If you’re thinking about swapping it out, be sure to check out our guide to the best SSDs.
[Today’s deal: Dell G5 15 5590 for $1,149 at Walmart]
Ian is an independent writer based in Israel who has never met a tech subject he didn’t like. He primarily covers Windows, PC and gaming hardware, video and music streaming services, social networks, and browsers. When he’s not covering the news he’s working on how-to tips for PC users, or tuning his eGPU setup.
Ray tracing is a lighting technique that brings an extra level of realism to games. It emulates the way light reflects and refracts in the real world, providing a more believable environment than what’s typically seen using the static lighting in more traditional games. But what is ray tracing, exactly? And more importantly, how does it work?
A good graphics card can use ray tracing to enhance immersion, but not all GPUs can handle this technique. Read on to decide if ray tracing is essential to your gaming experience and if it justifies spending hundreds on an upgraded GPU.
To understand just how ray tracing’s revolutionary lighting system works, we need to step back and understand how games previously rendered light and what needs to be emulated for a photorealistic experience.
Games without ray tracing rely on static “baked in” lighting. Developers place light sources within an environment that emit light evenly across any given view. Moreover, virtual models like NPCs and objects don’t contain any information about any other model, requiring the GPU to calculate light behavior during the rendering process. Surface textures can reflect light to mimic shininess, but only light emitted from a static source. Take the comparison of reflections in GTA V below as an example.
Overall, the GPU’s evolution has helped this process become more realistic in appearance over the years, but games still aren’t photorealistic in terms of real-world reflections, refractions, and general illumination. To accomplish this, the GPU needs the ability to trace virtual rays of light.
In the real world, visible light is a small part of the electromagnetic radiation family perceived by the human eye. It contains photons that behave both as a particle and as a wave. Photons have no real size or shape — they can only be created or destroyed.
That said, light could be identified as a stream of photons. The more photons you have, the brighter the perceived light. Reflection occurs when photons bounce off a surface. Refraction occurs when photons — which travel in a straight line — pass through a transparent substance and the line is redirected, or “bent.” Destroyed photons can be perceived as “absorbed.”
Ray tracing in games attempts to emulate the way light works in the real world. It traces the path of simulated light by tracking millions of virtual photons. The brighter the light, the more virtual photons the GPU must calculate, and the more surfaces off which that light will reflect, refract, and scatter.
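The core primitive behind all of that tracing is a ray-geometry intersection test: given a ray and an object, find where (if anywhere) they meet. As a rough illustration — this is textbook math, not any engine’s actual code — here is the standard ray-sphere intersection, which reduces to solving a quadratic:

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Distance along the ray to the nearest sphere hit, or None on a miss.

    origin, direction, center are 3-tuples; direction is assumed normalized.
    Solves |origin + t*direction - center|^2 = radius^2 for t.
    """
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * sum(d * e for d, e in zip(direction, oc))
    c = sum(e * e for e in oc) - radius * radius
    disc = b * b - 4.0 * c  # the quadratic's "a" is 1 for a unit direction
    if disc < 0:
        return None  # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

# A ray fired from the origin straight down the z-axis at a unit sphere
# centered 5 units away hits the near surface 4 units out:
print(ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # 4.0
```

A real renderer runs tests like this for millions of rays per frame against far more complex geometry, which is why dedicated hardware matters.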
The process isn’t anything new. CGI has used ray tracing for decades, though in the early days it required farms of computers to render a full movie, since a single frame could take hours or even days. Now home PCs can emulate ray-traced graphics in real time, leveraging hardware acceleration and clever lighting tricks to limit the number of rays to a manageable number.
But here’s the real eye-opener. Like any movie or TV show, scenes in CGI animation are typically “shot” using different angles. For each frame, you can move a camera to capture the action, zoom in, zoom out, or pan an entire area. And like animation, you must manipulate everything on a frame-by-frame basis to emulate movement. Piece all the footage together and you have a flowing story.
In games, you control a single camera that’s always in motion and always changing the viewpoint, especially in fast-paced games. In both CGI and ray-traced games, the GPU not only must calculate how light reflects and refracts in any given scene, but it also must calculate how it’s captured by the lens — your viewpoint. For games, that’s an enormous amount of computational work for a single PC or console.
Unfortunately, we still don’t have consumer-level PCs that can truly render ray-traced graphics at high framerates. Instead, we now have hardware that can cheat effectively.
Let’s get real
Ray tracing’s fundamental similarity to real life makes it an extremely realistic 3D rendering technique, even making blocky games like Minecraft look near photo-realistic in the right conditions. There’s just one problem: It’s extremely hard to simulate. Recreating the way light works in the real world is complicated and resource-intensive, requiring masses of computing power.
That’s why existing ray-tracing options in games, like Nvidia’s RTX-driven ray tracing, aren’t true to life. They’re not true ray tracing, whereby every point of light is simulated. Instead, the GPU “cheats” by using several smart approximations to deliver something close to the same visual effect, but without being quite as taxing on the hardware. This will likely change in future GPU generations, but for now, this is a step in the right direction.
Most ray tracing games now use a combination of traditional lighting techniques, typically called rasterization, and ray tracing on specific surfaces such as reflective puddles and metalwork. Battlefield V is a great example of that. You see the reflection of troops in water, the reflection of terrain on airplanes, and the reflection of explosions across a car’s paint. It’s possible to show reflections in modern 3D engines, but not at the level of detail shown in games like Battlefield V when ray tracing is enabled.
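The reflections described above come down to a textbook vector identity: a ray with direction d bouncing off a surface with unit normal n leaves in direction d − 2(d·n)n. A minimal sketch of that formula (an illustration, not engine code):

```python
def reflect(d, n):
    """Reflect direction d about unit normal n: r = d - 2*(d.n)*n."""
    k = 2.0 * sum(a * b for a, b in zip(d, n))
    return tuple(a - k * b for a, b in zip(d, n))

# A ray heading down and to the right bounces off a flat floor
# (normal pointing straight up): horizontal motion is preserved,
# vertical motion flips.
print(reflect((1, -1, 0), (0, 1, 0)))  # (1.0, 1.0, 0.0)
```

A hybrid renderer applies this only where it pays off visually — puddles, glass, glossy paint — and falls back to rasterization everywhere else.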
Ray tracing can also be leveraged for shadows to make them more dynamic and realistic looking. You’ll see that used to great effect in Shadow of the Tomb Raider.
Ray-traced lighting can create much more realistic shadows in dark and bright scenes, with softer edges and greater definition. Achieving that look without ray tracing is extraordinarily hard. Developers can only fake it through careful, controlled use of preset, static light sources. Placing all these “stage lights” takes a lot of time and effort — and even then, the result isn’t quite right.
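The idea behind ray-traced shadows can be sketched simply: a point is in shadow if a ray fired from it toward the light hits occluding geometry before reaching the light. Here is a toy version using spheres as occluders — an illustration of the concept under those assumptions, not how any shipping engine implements it:

```python
import math

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def norm(v):
    length = math.sqrt(dot(v, v))
    return tuple(x / length for x in v)

def sphere_hit_t(origin, direction, center, radius):
    # Distance to the nearest hit along a normalized ray, or None (standard quadratic).
    oc = sub(origin, center)
    b = 2.0 * dot(direction, oc)
    c = dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 1e-6 else None  # small epsilon avoids self-intersection

def in_shadow(point, light_pos, occluders):
    """True if any occluding sphere blocks the path from point to the light."""
    to_light = sub(light_pos, point)
    dist = math.sqrt(dot(to_light, to_light))
    d = norm(to_light)
    for center, radius in occluders:
        t = sphere_hit_t(point, d, center, radius)
        if t is not None and t < dist:  # something is hit before the light
            return True
    return False

light = (0, 10, 0)
blocker = ((0, 5, 0), 1.0)  # a sphere directly between the point and the light
print(in_shadow((0, 0, 0), light, [blocker]))  # True
print(in_shadow((5, 0, 0), light, [blocker]))  # False (off to the side)
```

Because the test is geometric rather than baked in, shadows respond automatically as lights and objects move — the effect developers otherwise have to fake with static stage lighting.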
Some games go the whole hog and use ray tracing for global illumination, effectively ray-tracing an entire scene. But that’s the most computationally expensive and needs the most powerful of modern graphics cards to run effectively. Metro Exodus uses this technique but the implementation isn’t perfect.
Because of that, half-measures like only ray-tracing shadows or reflective surfaces are popular. Other games leverage Nvidia technologies like denoising and Deep Learning Super Sampling to improve performance and to cover up some of the visual hiccups that occur from rendering fewer rays than would be necessary to create a truly ray-traced scene. Fully ray-traced scenes are still reserved for pre-rendered screenshots and movies, where high-powered servers can spend days rendering single frames.
The hardware behind the rays
To handle even these relatively modest implementations of ray tracing, Nvidia’s RTX 20-series graphics cards introduced hardware specifically built for ray tracing.
Nvidia’s Turing architecture — featured on 20-series GPUs — introduced RT cores alongside Nvidia’s CUDA and Tensor cores. RT cores are solely there to handle real-time ray tracing. In Turing cards, the RT cores performed decently, but it wasn’t until the recent Ampere launch that we saw them shine.
Nvidia released a breakdown of generating a single frame of Metro Exodus, where it showed how the rendering pipeline is laid out and how it is affected by ray tracing. While an RTX 2080 and GTX 1080 Ti might be roughly comparable in performance for non-ray-traced games, when ray tracing is applied to a scene, it can take much longer for the 1080 Ti, without the dedicated RT cores, to generate the same image.
The dedicated RT cores in RTX 20-series GPUs were a big selling point, but they didn’t quite deliver the performance Nvidia suggested. Even last-gen’s 2080 Ti struggled in supported ray tracing titles upon launch. The new RTX 3080 and 3090 feature newer RT cores, however, and the performance improvement is clear. Not only are these cards faster than their last-gen counterparts, but the new RT cores are faster, too. In many ways, the RTX 30-series cards feel like the GPUs Nvidia was promising with RTX in the first place.
Nvidia’s ray tracing method isn’t the only option available, however. There are also Reshade “path tracing” post-processing effects that deliver comparable visuals without anything like the same performance hit.
AMD has options for ray tracing now, too, which we’ll get to next.
You’ll still want a powerful graphics card for ray tracing no matter the implementation, but as the technique catches on with game developers, we may see a broader array of supporting hardware at much more affordable prices.
What about AMD?
AMD has struggled over the past few years to deliver hardware-accelerated ray tracing. All we had was a CryEngine demo that could produce ray-traced reflections on a Vega 56. That’s changing with the upcoming launch of the RX 6800, 6800 XT, and 6900 XT, however. These new cards feature DirectX 12 ray tracing support, and although leaked benchmarks suggest AMD isn’t quite on Nvidia’s level in the ray-tracing department, AMD’s new cards should still perform better than RTX 20-series GPUs (read our RX 6800 XT vs. RTX 3080 and RX 6900 XT vs. RTX 3090 comparisons for more).
That hardly comes as a surprise considering the Big Navi architecture powering AMD’s RX 6000 cards. This same architecture is what powers the visuals in the PS5 and Xbox Series X, both of which feature hardware-accelerated ray tracing. We still have to wait for third-party benchmarks to validate the performance of ray tracing with AMD’s upcoming cards. However, because we’re seeing ray tracing as a standout feature on next-gen consoles, we expect better support and optimizations moving forward.
How can you see ray tracing at home?
You’ll need a recent — and expensive — graphics card to see ray tracing at home. Hardware-accelerated ray tracing is only available on Nvidia RTX 20-series and 30-series GPUs, and on AMD’s RX 6000-series GPUs (GTX 10-series and 16-series cards support ray tracing but lack RT cores). The RX 6000 cards aren’t out yet, RTX 30-series cards are likely to stay out of stock until 2021, and RTX 20-series cards have reached end of life, meaning stock is quickly dwindling from retailers. In late 2020, there aren’t many options, but that should hopefully change in the next few months. Once you’ve secured a graphics card, the rest is simple.
If you expect to play at resolutions above 1080p and with frame rates of 60 FPS or more, your best bet is to splurge for a top-of-the-line graphics card. At 4K, the RTX 3080 and RX 6800 XT are the standout cards, but you can get by with an RTX 3070 or RX 6800 if you’re willing to move to 1440p in certain titles.
When it comes to ray tracing-enabled games, the selection is still limited but growing at a good pace. To see some of the top examples of ray tracing at work, look at a few of the early RTX demo games, such as Battlefield V, Shadow of the Tomb Raider, and Metro Exodus. More recent games like Control and MechWarrior 5: Mercenaries also look compelling. Stay in the Light is a fantastic example of an indie horror game built completely around ray-traced reflections and shadows. You can also work your way through the remastered Quake II with RTX ray tracing.
There aren’t many easily accessible ray tracing games on the market today, but the list is growing. The PS5 and Xbox Series X are beginning to advertise ray tracing as a selling point, so it’s only a matter of time before more developers follow. Unlike Watch Dogs 2, the new Watch Dogs: Legion brings ray tracing to both consoles and PCs.