Categories
Game

Meta’s latest VR headset prototypes could help it pass the ‘Visual Turing test’

Meta wants to make it clear it’s not giving up on high-end VR experiences yet. So, in a rare move, the company is spilling the beans on several VR headset prototypes at once. The goal, according to CEO Mark Zuckerberg, is to eventually craft something that could pass the “visual Turing Test,” or the point where virtual reality is practically indistinguishable from the real world. That’s the Holy Grail for VR enthusiasts, but for Meta’s critics, it’s another troubling sign that the company wants to own reality (even if Zuckerberg says he doesn’t want to completely own the metaverse).

As explained by Zuckerberg and Michael Abrash, Chief Scientist of Meta’s Reality Labs, creating the perfect VR headset involves perfecting four basic concepts. First, they need to reach a high enough resolution for 20/20 VR vision (with no need for prescription glasses). Additionally, headsets need variable focal depth and eye tracking, so you can easily focus on nearby and faraway objects, as well as correct the optical distortions inherent in current lenses. (We’ve seen this tech in the Half Dome prototypes.) Finally, Meta needs to bring HDR, or high dynamic range, into headsets to deliver more realistic brightness, shadows and color depth. More so than resolution, HDR is a major reason why modern TVs and computer monitors look better than displays from a decade ago.

Meta Reality Labs VR headset prototypes

Meta

And of course, the company needs to wrap all of these concepts into a headset that’s light and easy to wear. In 2020, Facebook Reality Labs showed off a pair of concept VR glasses using holographic lenses, which looked like oversized sunglasses. Building on that original concept, the company revealed Holocake 2 today (above), its thinnest VR headset yet. It looks more traditional than the original pair, but notably, Zuckerberg says it’s a fully functional prototype that can play any VR game while tethered to a PC.

“Displays that match the full capacity of human vision are going to unlock some really important things,” Zuckerberg said in a media briefing. “The first is a realistic sense of presence, and that’s the feeling of being with someone or in some place as if you’re physically there. And given our focus on helping people connect, you can see why this is such a big deal.” He described testing photorealistic avatars in a mixed reality environment, where his VR companion looked like it was standing right beside him. While “presence” may seem like an esoteric term these days, it’s easier to understand once headsets can realistically connect you to remote friends, family and colleagues.

Meta’s upcoming Cambria headset appears to be a small step toward achieving true VR presence; the brief glimpses we’ve seen of its technology make it seem like a modest upgrade from the Oculus Quest 2. While admitting the perfect headset is far off, Zuckerberg showed off prototypes that demonstrate how much progress Meta’s Reality Labs has made so far.

Meta Reality Labs VR headset prototypes

Meta

There’s “Butterscotch” (above), which can display near-retinal resolution, allowing you to read the bottom line of an eye test in VR. To achieve that, Reality Labs engineers had to cut the Quest 2’s field of view in half, a compromise that definitely wouldn’t work in a finished product. The Starburst HDR prototype looks even wilder: it’s a bundle of wires, fans and other electronics that can produce up to 20,000 nits of brightness. That’s a huge leap from the Quest 2’s 100 nits, and it’s leagues ahead of even the super-bright Mini-LED displays we’re seeing today. (My eyes are watering at the thought of putting that much light close to my face.) Starburst is too large and unwieldy to strap onto your head, so researchers have to peer into it like a pair of binoculars.

Meta Mirror Lake VR concept

Meta

While the Holocake 2 appears to be Meta’s most polished prototype yet, it doesn’t include all of the technology the company is currently testing. That’s the goal of the Mirror Lake concept (above), which will offer holographic lenses, HDR, mechanical varifocal lenses and eye tracking. There’s no working model yet, but it’s a decent glimpse at what Meta is aiming for several years down the road. It looks like a pair of high-tech ski goggles, and it’ll be powered by LCD displays with laser backlights. The company is also developing a way to show your eyes and facial expressions to outside observers with an external display on the front.

Repost: Original Source and Author Link

Categories
Computing

Lenovo’s latest ThinkStation is smaller than an Xbox

Usually, the faster the PC, the hotter it runs and the bigger it gets. But what if you could have components worthy of the best desktops in a case that’s smaller than an Xbox Series X? That’s exactly what Lenovo is doing with its ThinkStation P360 Ultra, which clocks in at just under 4 liters of volume.

That’s almost three liters less than the Xbox Series X, and much, much smaller than the typical desktop, while still supporting up to a Core i9-12900K and an RTX A5000. With those specs, the P360 Ultra might be the fastest small form factor PC ever launched.

Lenovo

In order to cram all this hardware into such a tiny chassis, Lenovo worked with Intel and Nvidia to design the P360 Ultra from the ground up. Just like all other small form factor PCs, however, it wasn’t feasible to use the highest-end, most power-hungry parts. The top configuration does have a Core i9-12900K and an RTX A5000, but the 12900K is limited to 125 watts (down from the usual 241W), and the A5000 is actually the mobile version, which has significantly fewer cores and half the memory of the desktop RTX A5000 24GB.

But it’s important to keep in mind that this workstation is tiny and weighs just 4 pounds: You can literally pick it up with one hand. The hardware it has is also still very, very fast, even though it’s limited by power and thermal constraints, so the P360 Ultra should have no problem going head-to-head with everything except the fastest of high-end workstations (like Lenovo’s own ThinkStation P620).

The P360 Ultra is also quite robust in other categories: It supports up to 128GB of DDR5 4000MHz RAM (ECC and non-ECC options included), 8TB of NVMe storage, two Thunderbolt 4 ports, and 2.5 Gigabit Ethernet. One important thing to point out here, however, is that the NVMe SSDs only run at PCIe 3.0, which is half the speed of PCIe 4.0. Though 12th-generation CPUs do support PCIe 4.0 SSDs, this seems to be a limitation of the motherboard. The GPU on the other hand runs at PCIe 4.0, so no worries there.
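The PCIe 3.0 vs. 4.0 gap in the SSD slot is easy to quantify. Here is a rough Python sketch of per-direction link bandwidth, using the line rates and 128b/130b encoding from the PCIe specs (the function name is ours, for illustration):

```python
# Rough effective bandwidth of a PCIe link, per direction.
# Line rates and encoding per the PCIe specifications:
#   Gen3: 8 GT/s per lane, 128b/130b encoding
#   Gen4: 16 GT/s per lane, 128b/130b encoding
def pcie_bandwidth_gbps(gen: int, lanes: int) -> float:
    rates = {3: 8.0, 4: 16.0}   # GT/s per lane
    encoding = 128 / 130        # 128b/130b encoding overhead
    # GT/s * encoding gives Gbit/s of payload; divide by 8 for GB/s
    return rates[gen] * encoding * lanes / 8

print(pcie_bandwidth_gbps(3, 4))  # ~3.94 GB/s for a Gen3 x4 SSD
print(pcie_bandwidth_gbps(4, 4))  # ~7.88 GB/s for a Gen4 x4 SSD
```

That factor of two is exactly the "half the speed" the P360 Ultra's storage gives up, though in practice few workstation workloads saturate even a Gen3 x4 link.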

A pair of headphones resting on the P360 Ultra.
Lenovo

The ThinkStation P360 Ultra will be available later this month, starting at $1,299 for a model with a 12th-Gen Core i3. That’s not cheap, but small form factors always cost extra.



Categories
AI

Tesla AI Day: what to expect from Elon Musk’s latest big announcement

It’s been nearly two years since Tesla’s first “Autonomy Day” event, at which CEO Elon Musk made numerous lofty predictions about the future of autonomous vehicles, including his infamous claim that the company would have “one million robotaxis on the road” by the end of 2020. And now it’s time for Part Deux.

This time, the event will be called “AI Day,” and according to Musk, the “sole goal” is to persuade experts in the field of robotics and artificial intelligence to come work at Tesla. The company is known for its high rate of turnover, the latest departure being Jerome Guillen, a key executive who worked at Tesla for 10 years before recently stepping down. Attracting and retaining talent, especially top-tier names, has proven to be a challenge for the company.

The August 19th event is scheduled to start at 5PM PT / 8PM ET at Tesla’s headquarters in Palo Alto, California. According to an invitation obtained by Electrek, it will feature “a keynote by Elon, hardware and software demos from Tesla engineers, test rides in Model S Plaid, and more.” Much like Battery Day, the event will be livestreamed on Tesla’s website, giving investors and the media, as well as the company’s many fans, an up-close look at what’s under development.

Musk and other top officials at the company are expected to provide updates on the rollout of Tesla’s “Full Self-Driving” (FSD) beta version 9, which started reaching more customers this summer. We may also get details about Tesla’s “Dojo” supercomputer, the training of its neural network, and the production of its FSD computer chips. And there will also be “an inside look at what’s next for AI at Tesla beyond our vehicle fleet,” the invitation says.

Let’s start with what we know and work our way toward the speculation of what’s to come.

Tesla Gigafactory - Elon Musk

Photo by Patrick Pleul / picture alliance via Getty Images

FSD rollout

The big news out of Tesla’s first Autonomy Day was the introduction of the company’s first computer chip, a 260 square millimeter piece of silicon that Musk described as “the best chip in the world.” Originally, Musk had claimed that Tesla’s cars wouldn’t need any hardware updates, only software, on the road to full autonomy. Turns out that wasn’t exactly the case; they would need this new chip — two of them, actually — in order to eventually drive themselves.

A lot has happened between the 2019 event and now. Last month, Tesla began shipping over-the-air software updates for FSD beta v9, its long-awaited, definitely not autonomous, but certainly advanced driver assist system. That means that Tesla owners who have purchased the FSD option (which now costs $10,000) would finally be able to use many of Autopilot’s advanced driver-assist features on local, non-highway streets, including Navigate on Autopilot, Auto Lane Change, AutoPark, Summon, and Traffic Light and Stop Control.

The update doesn’t make Tesla’s cars fully autonomous, nor will it launch “a million self-driving cars” on the road, as Musk predicted. Tesla owners who have Full Self-Driving still need to pay attention to the road and keep their hands on the steering wheel. Some don’t, which can have tragic consequences.

Loved by fans, loathed by safety advocates, the FSD software has gotten Tesla in a lot of hot water recently. In recently publicized emails between Tesla and California’s Department of Motor Vehicles, the company’s director of Autopilot software made it clear that Musk’s comments (including his tweets) do not reflect the reality of what Tesla’s vehicles can actually do. And now Autopilot is under investigation by federal regulators who want to know why Teslas with Autopilot keep crashing into emergency vehicles.

Aside from the rollout of FSD beta v9, Tesla has also had to adjust to the global chip shortage. In a recent earnings call, Musk said that the company’s engineers had to rewrite some of their software in order to accommodate alternate computer chips. He also said that Tesla’s future growth will depend on a swift resolution to the global semiconductor shortage.

Tesla relies on chips to power everything from its airbags to the modules that control the vehicles’ seatbelts. It’s not clear whether the FSD chips, which are produced by Samsung, are being impacted by the shortage. Musk and his cohort may provide some insight into that during this week’s event.

Credit: Tesla

Dojo

Outside the car, Tesla uses a powerful supercomputer to train the AI software that then gets fed to its customers via over-the-air software updates. In 2019, Musk teased this “super powerful training computer,” which he referred to as “Dojo.”

“Tesla is developing a [neural net] training computer called Dojo to process truly vast amounts of video data,” he later tweeted. “It’s a beast!”

He also hinted at Dojo’s computing power, claiming it was capable of an exaFLOP, or one quintillion (10¹⁸) floating-point operations per second. That is an incredible amount of power. “To match what a one exaFLOP computer system can do in just one second,” NetworkWorld wrote last year, “you’d have to perform one calculation every second for 31,688,765,000 years.”
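The arithmetic behind that comparison checks out, as a quick back-of-the-envelope calculation shows:

```python
# Back-of-the-envelope check of the exaFLOP comparison: at one
# calculation per second, how many years would it take to match
# one second of a 1-exaFLOP machine?
exaflop = 10**18                       # operations per second
seconds_per_year = 365.25 * 24 * 3600  # ~31.6 million seconds

years = exaflop / seconds_per_year
print(f"{years:,.0f}")  # ~31.7 billion years, matching the quote
```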

By way of comparison, chipmaker AMD and computer builder Cray are currently working with the US Department of Energy on the design of the world’s fastest supercomputer, with 1.5 exaFLOPs of processing power. Dubbed Frontier, AMD says the supercomputer will have as much processing power as the next 160 fastest supercomputers combined.

When completed, Dojo is expected to be among the most powerful supercomputers on the planet. But rather than performing advanced calculations in areas like nuclear and climate research, Tesla’s supercomputer is running a neural net for the purposes of training its AI software to power self-driving cars. Ultimately, Musk has said Tesla will make Dojo available to other companies that want to use it to train their neural networks.

Earlier this year, Andrej Karpathy, Tesla’s head of AI, gave a presentation at the 2021 Conference on Computer Vision and Pattern Recognition, during which he offered more details about Dojo and its neural network.

“For us, computer vision is the bread and butter of what we do and what enables Autopilot,” Karpathy said, according to Electrek. “And for that to work really well, we need to master the data from the fleet, and train massive neural nets and experiment a lot. So we invested a lot into the compute.”

Other robots?

Earlier this month, Dennis Hong, founder of the Robotics and Mechanisms Laboratory at UCLA, tweeted a photo of a computer chip that many speculate is the in-house hardware used by Tesla’s Dojo.

But Hong is an interesting figure for other reasons, too. He specializes in humanoid robots and was a participant in the DARPA Urban Challenge, which kicked off the race for self-driving cars. (His team placed third.)

Asked on Twitter whether his lab was working with Tesla, Hong posted some playful emojis but otherwise declined comment. We may learn more about how Hong’s work and Tesla’s pursuits intersect during AI Day.

Musk has been forthcoming about his desires for Tesla to become more than just a car company. “I think long term, people will think of Tesla as much as an AI robotics company as we are a car company or an energy company,” he said earlier this year.

Photo by Andrew Caballero-Reynolds / AFP via Getty Images

The future

A warning for anyone tuning in to the AI Day livestream: take Musk’s predictions about near-term accomplishments with a massive grain of salt. The things that will be discussed during this event are unlikely to have any measurable impact on the company’s business in the months to come.

Self-driving cars are an incredibly difficult challenge. Even companies like Waymo that are perceived to have the best autonomous vehicle technology are still struggling to get it right. Tesla is no different.

“A key question for investors will be what the latest timeline is for achieving full autonomy,” Loup Funds managing partner Gene Munster said in a note. “Despite Elon’s ambitious goal of the end of this year, our best guess is that 2025 will be the first year of public availability of level 4 autonomy.”

The rest of 2021 is already jam-packed for Tesla. The company needs to open factories in Texas and Germany. And it needs to tool up production of its hotly anticipated Cybertruck, which has been delayed until 2022. Full autonomy, such as it is, can wait.




Categories
Security

Firefox’s latest security feature is designed to protect itself from buggy code

Firefox 95, the latest version of Mozilla’s browser that starts rolling out today, introduces a new security feature designed to limit the damage that bugs and security vulnerabilities in its code can cause, Mozilla announced. The feature, called RLBox, was developed with help from researchers at the University of California San Diego and the University of Texas, and it was originally released as a prototype last year. It’s coming to both the desktop and mobile versions of Firefox.

At its core, RLBox is a sandboxing technology, which means that it’s effectively able to isolate code so that any security vulnerabilities it might contain can’t harm the overall system. Sandboxing is a widely used security method across the industry, and browsers already run web content in sandboxed processes to try to stop malicious or buggy sites from compromising the overall browser.

RLBox differs from this traditional approach, however, and doesn’t have the same costs to performance and memory usage. This makes it possible to sandbox critical browser subcomponents like its spell checker, effectively allowing it to treat them as untrusted code while still running in the same process. This places limits on how code can run or which memory it can access.
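The core idea behind RLBox, per the researchers' papers, is that data leaving the sandbox is "tainted" and must pass an application-supplied validator before trusted code may use it. Here is a loose Python analogy of that discipline (the real implementation is a C++ type-level mechanism; the names here are illustrative, not Firefox's actual API):

```python
# Loose Python analogy of RLBox's "tainted data" idea: values coming
# back from sandboxed code cannot be used until explicitly verified.
# (The real RLBox enforces this at compile time via C++ types.)
class Tainted:
    def __init__(self, value):
        self._value = value  # hidden until verified

    def copy_and_verify(self, check):
        # The caller must supply a validator; only data that passes
        # the check crosses into trusted code.
        if not check(self._value):
            raise ValueError("sandbox returned unexpected data")
        return self._value

def sandboxed_spellcheck(word):
    # Pretend this ran inside the sandbox; its output is untrusted.
    return Tainted(word.lower() in {"hello", "world"})

result = sandboxed_spellcheck("Hello")
ok = result.copy_and_verify(lambda v: isinstance(v, bool))
print(ok)  # True
```

The point is that even if the sandboxed spell checker is compromised, the worst it can do is return data, and that data is checked at the boundary before the browser trusts it.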

As of today’s release, Firefox is isolating five modules: its Graphite font rendering engine, Hunspell spell checker, Ogg multimedia container format, Expat XML parser, and Woff2 web font compression format. Mozilla says this means if bugs or vulnerabilities are discovered in one of these subcomponents, the Firefox team won’t need to scramble to stop them from compromising the entire browser. “Even a zero-day vulnerability in any of them should pose no threat to Firefox,” Mozilla says.

Mozilla admits that it’s not a catch-all solution and that the approach won’t work everywhere, such as particularly performance-sensitive browser components. But the developer says it hopes to see other browsers and software projects implement the technology and that it intends to use it with more of Firefox’s components in the future. Mozilla has also updated its bug bounty program and will now pay researchers if they’re able to bypass the new sandboxes.


Categories
Game

Epic’s latest Fortnite teaser all but confirms entirely new Chapter 3 map

Epic Games has kicked off Black Friday by dropping a new Fortnite Chapter 3 teaser, one that all but confirms players will, indeed, drop onto an entirely new map once the current season ends. Rumors about the new map have been circulating for a while, but the first real indication came from Epic itself with the Chapter 3 announcement.

Epic Games/YouTube

Following rumors about a big change, Epic confirmed that Fortnite‘s current season will be the last in the game’s second chapter, meaning next month will bring the big Chapter 3 update. The announcement was made in a teaser trailer, which the company followed up with another tweet today.

To properly understand the new tweet, you should first check out Epic’s teaser trailer: it ends with the name for this finale, “The End.” Players noted that when flipped upside down, “The End” shows what appears to be Steamy Stacks and a large mountain or volcano.

Image: Fortnite/Epic Games

The decision to place the landscape silhouette upside down doesn’t seem to be a mere style choice. Fans have been speculating that Chapter 3 will essentially “flip” the battle royale island, bringing the fight to the other side. Assuming that does take place, it raises new questions.

Will the map get a full overhaul, or will it be a mirrored version of the current map with smaller changes throughout? Given complaints from players that the current chapter is starting to feel stale, it seems reasonable that Epic would overhaul the island.

The company seemingly reinforces that speculation with its new tweet, indicating that once Chapter 3 arrives, Fortnite players will get an entirely new experience akin to when they first played Fortnite. The big change will be ushered in by the Chapter 2 – Season 8 finale scheduled for December 4 at 1 PM PT / 4 PM ET.

Popular Fortnite data-miner and leaker HYPEX claims in a recent tweet that the Season 8 finale will result in another “black hole,” which is a placeholder that persists for a few days while Epic updates Fortnite with major changes. Should the leak prove true, the black hole will disappear and Chapter 3 will arrive on December 7.

HYPEX goes on to claim that the sources who provided this information have “never” been wrong in the past. The account also indicates that it knows more about Chapter 3 than it has revealed, saying the next chapter is “SOO good.” Epic says Chapter 2 will end with players battling the Cube Queen and whatever she has planned in a one-time in-game event on December 4.




Categories
Game

Activision Blizzard’s latest anti-harassment effort is a ‘responsibility committee’

Activision Blizzard is facing increasing scrutiny from the government and the games industry over its handling of the ongoing sexual harassment scandal, and its latest effort might not help. As Kotaku reports, the developer has formed a “Workplace Responsibility Committee” to help it implement new anti-harassment and anti-discrimination efforts. While that sounds useful at first, there’s a concern the initial committee is more symbolic than functional.

The committee will launch with just two members, both of whom (chair Dawn Ostroff and Reveta Bowers) are existing independent board members. They, in turn, will report to the board and key Activision Blizzard executives — including CEO Bobby Kotick, who some argue is partly to blame for the scandal. The duo will work with an outside coordinator and a consultant following the company’s settlement with the EEOC, but there’s no mention of involving regular company staff or outsiders who weren’t part of that court agreement.

As such, it won’t be surprising if the committee does little to satisfy critics. Employees and others have called on Kotick to resign, among other more substantial changes. There’s also low confidence in leadership’s ability to police itself — Jennifer Oneal, Blizzard’s first female leader, allegedly left her position feeling she was the target of discrimination by a seemingly irredeemable company culture. Bloomberg noted that some board members (including Ostroff) are Kotick’s longtime friends and connections, for that matter. The committee might need to take aggressive steps if it wants to prove it’s more than a superficial gesture.

All products recommended by Engadget are selected by our editorial team, independent of our parent company. Some of our stories include affiliate links. If you buy something through one of these links, we may earn an affiliate commission.


Categories
Security

Over a million GoDaddy WordPress customers had email addresses exposed in latest breach

GoDaddy has suffered a security breach that gave an attacker access to more than 1 million email addresses belonging to the company’s active and inactive Managed WordPress users, according to a disclosure it filed with the SEC on Monday.

The company says the attacker gained access to a provisioning system (meant to set up and automatically configure new sites when customers create them) in early September by “using a compromised password.” GoDaddy says that it noticed the intrusion on November 17th and immediately locked the attacker out before beginning an investigation and contacting law enforcement.

The attacker had access to more than just the email addresses — they could also see the original WordPress admin passwords set by the provisioning system, as well as the credentials for active users’ databases and sFTP systems. The company also says that some customers had their private SSL keys exposed, which are responsible for proving that a website is who it says it is (powering the little lock icon you often see in your browser’s address bar).

According to GoDaddy, it’s working to mitigate the issues by resetting affected passwords and regenerating security certificates if needed. The company also says that it’s “contacting all impacted customers directly with specific details.” While those seem like appropriate steps, having to deal with a reset password will probably be a nuisance for some of its users.

GoDaddy didn’t immediately respond to a request for comment about how the attacker obtained the compromised password used to access its systems. Its announcement does say, however, that its investigation is ongoing.

In recent intrusions at other companies, phishing or social engineering has been to blame (though there have also been instances of simply poor password security). GoDaddy itself has some pretty upsetting history with testing its employees’ cybersecurity awareness when it comes to fake emails, but attackers really only need to get lucky once to access treasure troves of data.


Categories
AI

Nvidia’s latest AI tech translates text into landscape images

Nvidia today detailed an AI system called GauGAN2, the successor to its GauGAN model, that lets users create lifelike landscape images that don’t exist. Combining techniques like segmentation mapping, inpainting, and text-to-image generation in a single tool, GauGAN2 is designed to create photorealistic art with a mix of words and drawings.

“Compared to state-of-the-art models specifically for text-to-image or segmentation map-to-image applications, the neural network behind GauGAN2 produces a greater variety and higher-quality of images,” Isha Salian, a member of Nvidia’s corporate communications team, wrote in a blog post. “Rather than needing to draw out every element of an imagined scene, users can enter a brief phrase to quickly generate the key features and theme of an image, such as a snow-capped mountain range. This starting point can then be customized with sketches to make a specific mountain taller or add a couple of trees in the foreground, or clouds in the sky.”

Generated images from text

GauGAN2, whose namesake is post-Impressionist painter Paul Gauguin, improves upon Nvidia’s GauGAN system from 2019, which was trained on more than a million public Flickr images. Like GauGAN, GauGAN2 has an understanding of the relationships among objects like snow, trees, water, flowers, bushes, hills, and mountains, such as the fact that the type of precipitation changes depending on the season.

GauGAN and GauGAN2 are a type of system known as a generative adversarial network (GAN), which consists of a generator and a discriminator. The generator takes samples — e.g., images paired with text — and predicts which data (words) correspond to other data (elements of a landscape picture). It is trained by trying to fool the discriminator, which assesses whether its outputs look realistic. While the generator’s outputs are initially poor in quality, they improve with the feedback of the discriminator.
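In code, that adversarial setup is compact. Here is a minimal NumPy sketch of one loss computation for each network (toy shapes and weights, purely illustrative; GauGAN2's actual architecture is vastly larger and convolutional):

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(z, w):
    # The generator maps random noise vectors to fake "samples"
    # (here just 2-D points; in GauGAN2, full landscape images).
    return np.tanh(z @ w)

def discriminator(x, w):
    # The discriminator scores each sample with the probability
    # that it came from the real dataset.
    logits = x @ w
    return 1 / (1 + np.exp(-logits))

z = rng.normal(size=(8, 4))            # batch of noise
g_w = rng.normal(size=(4, 2)) * 0.1    # toy generator weights
d_w = rng.normal(size=(2, 1)) * 0.1    # toy discriminator weights

real = rng.normal(loc=2.0, size=(8, 2))  # batch of "real" data
fake = generator(z, g_w)

# Discriminator objective: label real samples 1 and fakes 0.
d_loss = -np.mean(np.log(discriminator(real, d_w)) +
                  np.log(1 - discriminator(fake, d_w)))

# Generator objective: make the discriminator score fakes as real.
g_loss = -np.mean(np.log(discriminator(fake, d_w)))

print(fake.shape)  # (8, 2)
```

Training alternates gradient steps on these two opposing losses; as the discriminator gets better at spotting fakes, the generator is forced to produce more convincing ones.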

Unlike GauGAN, GauGAN2 — which was trained on 10 million images — can translate natural language descriptions into landscape images. Typing a phrase like “sunset at a beach” generates the scene, while adding adjectives like “sunset at a rocky beach” or swapping “sunset” to “afternoon” or “rainy day” instantly modifies the picture.

GauGAN2

With GauGAN2, users can generate a segmentation map — a high-level outline that shows the location of objects in the scene. From there, they can switch to drawing, tweaking the scene with rough sketches using labels like “sky,” “tree,” “rock,” and “river” and allowing the tool’s paintbrush to incorporate the doodles into images.

AI-driven brainstorming

GauGAN2 isn’t unlike OpenAI’s DALL-E, which can similarly generate images to match a text prompt. Systems like GauGAN2 and DALL-E are essentially visual idea generators, with potential applications in film, software, video games, product, fashion, and interior design.

Nvidia claims that the first version of GauGAN has already been used to create concept art for films and video games. As it did with the original, Nvidia plans to make the code for GauGAN2 available on GitHub alongside an interactive demo on Playground, the web hub for Nvidia’s AI and deep learning research.

One shortcoming of generative models like GauGAN2 is the potential for bias. In the case of DALL-E, OpenAI used a special model — CLIP — to improve image quality by surfacing the top samples among the hundreds per prompt generated by DALL-E. But a study found that CLIP misclassified photos of Black individuals at a higher rate and associated women with stereotypical occupations like “nanny” and “housekeeper.”

GauGAN2

In its press materials, Nvidia declined to say how — or whether — it audited GauGAN2 for bias. “The model has over 100 million parameters and took under a month to train, with training images from a proprietary dataset of landscape images. This particular model is solely focused on landscapes, and we audited to ensure no people were in the training images … GauGAN2 is just a research demo,” an Nvidia spokesperson explained via email.

GauGAN is one of the newest reality-bending AI tools from Nvidia, creator of deepfake tech like StyleGAN, which can generate lifelike images of people who never existed. In September 2018, researchers at the company described in an academic paper a system that can craft synthetic scans of brain cancer. That same year, Nvidia detailed a generative model that’s capable of creating virtual environments using real-world videos.

GauGAN’s initial debut preceded GAN Paint Studio, a publicly available AI tool that lets users upload any photograph and edit the appearance of depicted buildings, flora, and fixtures. Elsewhere, generative machine learning models have been used to produce realistic videos by watching YouTube clips, creating images and storyboards from natural language captions, and animating and syncing facial movements with audio clips containing human speech.



Categories
Game

The latest version of NVIDIA’s DLSS technology is better at rendering moving objects

NVIDIA has released a major update for its DLSS technology. With version 2.3 of the software, the company says the AI algorithm makes smarter use of motion vectors to improve how objects look when they’re moving. The update also helps reduce ghosting, make particle effects look clearer and improve temporal stability. The latter has traditionally been one of the weakest aspects of the technology, so DLSS 2.3 represents a major improvement. As of today, 16 games support DLSS 2.3, with highlights including Cyberpunk 2077, Deathloop and Doom Eternal.

If you don’t own an RTX GPU but still want to take advantage of the performance boost you can get from upscaling a game, NVIDIA has updated its Image Scaling technology to improve both fidelity and performance. Accessible through the NVIDIA Control Panel, the tool uses spatial upscaling to do the job. That means the result isn’t as clean as the temporal method DLSS uses, but the advantage is you don’t need special hardware. To that end, NVIDIA is releasing an SDK that will allow any GPU, regardless of make, to take advantage of the technology. In that way, NVIDIA says, game developers can offer the best of both worlds: DLSS for the best possible image quality and NVIDIA Image Scaling for cross-platform support.
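To see the difference in kind: spatial upscaling looks only at the current frame. Here is a minimal nearest-neighbor example in NumPy — far cruder than NVIDIA's actual filter, but the single-frame principle is the same, whereas DLSS also draws on motion vectors and previous frames:

```python
import numpy as np

def spatial_upscale(img: np.ndarray, factor: int) -> np.ndarray:
    # Nearest-neighbor spatial upscaling: each output pixel samples
    # the closest source pixel. No motion vectors or frame history
    # are involved -- only the current frame.
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

frame = np.array([[0, 1],
                  [2, 3]])
print(spatial_upscale(frame, 2))  # each source pixel becomes a 2x2 block
```

Because no history is used, spatial methods can't reconstruct detail that isn't in the frame, which is why the results trail DLSS but run on any GPU.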

All products recommended by Engadget are selected by our editorial team, independent of our parent company. Some of our stories include affiliate links. If you buy something through one of these links, we may earn an affiliate commission.


Categories
Game

Latest New World update has good news for broke players, bad news for gold sellers

Overnight, New World received its weekly update, and there were some big changes contained in this one. While we received the usual round of bug fixes, Amazon also implemented some changes concerning the gold cost of attribute respecs and the Azoth cost of weapon skill tree respecs, meaning that players should be able to change their builds much more frequently. In addition, Amazon detailed some measures it’s implementing to combat all of the gold sellers players have undoubtedly seen in chat.

New World update 1.0.5 changes

In its patch notes for update 1.0.5, Amazon dove right into a hot-button topic among the New World player base: coin sellers. No matter the MMO, there will always be accounts looking to sell currency, and New World is no different. Amazon starts off by saying that it has banned and suspended many of the gold-selling accounts that players have been reporting.

The company will now also require that new accounts hit player level 10 before they can participate in player-to-player trades or make currency transfers. Accounts will also have to be older than 72 hours to do both of those things. In addition, Amazon has moved some gold rewards from early main story quests to later quests to avoid giving sellers an early way to grind out gold, and the Trading Post will no longer be accessible before new players accept the “Introduction to the Trading Post” quest.
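The new gating rule combines two independent checks, and both must pass. A minimal sketch of the logic as described in the patch notes (the function name and signature are hypothetical, not Amazon's code):

```python
def can_trade(player_level: int, account_age_hours: float) -> bool:
    """Per update 1.0.5: player-to-player trades and currency
    transfers require reaching level 10 AND an account older
    than 72 hours. Either check failing blocks the trade."""
    return player_level >= 10 and account_age_hours > 72
```

The point of requiring both conditions is that a gold seller can't satisfy the rule quickly: power-leveling a fresh account to 10 doesn't help until the 72-hour clock has also run out.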

While we wait to see if those changes have any noticeable effect on the number of gold sellers spamming the chat, this week’s patch also fixed a number of issues. First up is a fix for an item duplication bug that affected storage sheds and crafting stations, along with a fix for the gold duplication bug that sprouted up after Amazon disabled wealth transfers earlier this week.

In a rather huge change, Amazon announced that it has reduced attribute respec costs by 60%, while the Azoth cost for weapon respecs has been decreased by 75%. This is exciting news for players who like to switch builds frequently or, like the author of this article, can’t seem to decide on a weapon combo and stick with it for any length of time. The coin cost of chisels has also been reduced by up to 50%, though that depends on the tier. Sadly, players can no longer equip two weapons of the same type, so if you want to put both weapon slots to use, you’ll have to pick two distinct weapon types for them.
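To put those percentages in concrete terms, a flat reduction leaves you paying the remaining fraction of the old price. The base values below are illustrative placeholders, not actual in-game costs:

```python
def discounted_cost(base: int, reduction_pct: int) -> int:
    """Apply a flat percentage reduction, as in update 1.0.5's
    respec changes (60% off attribute respecs, 75% off weapon
    respecs, up to 50% off chisels depending on tier)."""
    return base * (100 - reduction_pct) // 100

# A hypothetical 1,000-coin attribute respec now costs 400 coins;
# a 1,000-Azoth weapon respec drops to 250.
print(discounted_cost(1000, 60))  # 400
print(discounted_cost(1000, 75))  # 250
```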

Amazon has also fixed issues affecting spell strike consistency, the resilient item perk (which reduced all damage instead of just critical hit damage), and group scaling passives for the Great Axe, Hatchet, and Warhammer. We’re also getting a number of fixes for Outpost Rush and a slate of general bug fixes, which you can read about over in the full patch notes.

Amazon talks present and future changes for New World

In addition to sending this patch live, Amazon also published a lengthy developer blog post in which it discussed many features, covered this recent patch, and talked about future changes that are on the way. The blog post can be found on the New World forums and is worth reading from beginning to end for anyone currently playing, but there are some things that stick out to us.

First, Amazon’s blog post includes a lengthy section on the economy and gold deflation, which has been a big concern for players. In short, Amazon says that the economy is “performing within acceptable levels,” but does acknowledge that the gold generated at level 60 – New World‘s level cap – does become more “narrow” than it is at earlier levels. Amazon says that the recent fix for tier IV and V Azoth Staffs and the reintroduction of Outpost Rush should help late-game players earn more gold, as those were intended to be end-game gold grinds.

Amazon also revealed a whopper of a change by confirming that all Trading Posts in the world will be linked, meaning you can buy any product from any Trading Post. Transaction taxes from the settlement you’re buying from will still apply, and expired items will be returned to the settlement where they were listed. This is a big change, and it should normalize prices while ensuring that you don’t need to travel to a central settlement just to buy things. In addition, Amazon will also be ramping up the coins gained from Expedition bosses, so expect more gold from your Expedition runs.

There’s a lot more contained within New World‘s latest dev blog, but these are among the biggest changes. Some of them are already in the game as of the most recent patch, while others will arrive with the month’s major release later in November.
