Artificial intelligence vs. neurophysiology: Why the difference matters

On the Temple of Apollo at Delphi in Greece was inscribed: “Cognosce te ipsum” (Know thyself). Everyone who wants to create artificial intelligence would do well to remember these words.

I continue my series of articles about the nature of human intelligence and the future of artificial intelligence systems. This article picks up where “Symbiosis Instead of Evolution — A New Idea about the Nature of Human Intelligence” left off.

In the previous article, after analyzing the minimum response time to a simple incoming signal, we concluded that the human brain is quite likely a binary system, consisting of two functional schemes for responding to excitation: a reflex scheme and an intellectual one.

In this article, we will look at the first of these, the reflex scheme. Together, we will try to find out how similar a reflex response really is to an algorithm, and how the answer might affect the future of artificial intelligence.

Similar does not mean exactly the same

What is the difference?

In popular science films, a nerve impulse is presented as a signal that travels through nerve cells as if they were wires, a biological analog of an electrical impulse.

In fact, this is not the case at all. A nerve impulse is a rapid movement of sodium and potassium ions across the outer membrane of a neuron through voltage-gated ion channels. The process can be compared to a falling line of cards or dominoes: after each nerve impulse, the neuron must return the ions to their original positions, which in our analogy means standing every card or domino back up.

A nerve impulse is hard work. In its deep physical essence, it is closer to mechanical work than to the electrical signal many imagine it to be.

This severely limits the speed of signal transmission in biological tissue. Along non-myelinated fibers of small diameter, a signal travels at only about one meter per second, the pace of a slow walk. In larger myelinated fibers, the speed rises to 45-60 kilometers per hour. Only in some large fibers with a thick myelin sheath and the special nodes of Ranvier does the speed reach 200-300 kilometers per hour.

On average, nerve impulses in our nervous system move about 3 million times slower than electrical signals in computer systems. Besides being slow, a nerve impulse also makes constant stops at synapses, the junctions between neurons, where it must cross the synaptic cleft before continuing. We can say that a nerve impulse is a rather slow journey with transfers.
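
As a rough sanity check of that figure, here is a minimal Kotlin sketch. The nerve-fiber speed comes from the article’s own numbers; the electrical speed is an assumed ballpark of about 2 × 10^8 meters per second (roughly two-thirds the speed of light in a conductor), not a measured value:

    // Rough sanity check of the "millions of times slower" claim.
    fun main() {
        val wireSpeed = 2.0e8                  // m/s, electrical signal in a conductor (assumed ballpark)
        val fastFiber = 300_000.0 / 3600.0     // m/s, the article's fastest fiber: 300 km/h
        println(wireSpeed / fastFiber)         // ~2.4 million, the same order of magnitude as the claim
    }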

All this suggests that a nerve impulse is already the result of serious effort, an effort that must deliver something worthwhile at the end of the path.

The computer algorithm is of a completely different nature


The algorithms that run in computers are driven by sequences of voltage changes: machine code consisting of ones and zeros.

Beyond speed and physical nature, there is a long list of important differences between a reflex and an algorithm. A nerve impulse, or reflex, is an inevitable response; an algorithm is a set of rules, a sequence of instructions designed to solve a specific problem.

In other words, a reflex can be wrong but can never be silent, while an algorithm, as a rule, does not make mistakes but may give no answer at all if the instructions it contains cannot be executed.

A reflex knows the answer before the task is even posed; an algorithm learns the answer only after completing all the necessary steps.

A simple example

Imagine a simple problem: find the value of X in the formula 1 + X + 3 = 6. The algorithm proceeds step by step: first 6 - 1 = 5, then 5 - 3 = 2, so X = 2. The reflex immediately answers X = 2. True, this happens only if the reflex has already encountered such a situation and learned empirically that the other answers are incorrect.

But what if the situation changes and the question becomes harder: 1 + X + Y = 6? Faced with this, the algorithm will remain silent and give no answer. Indeed, there is not enough initial data: the equation has several correct solutions, and the algorithm cannot determine which one is intended.

For the reflex, nothing has changed: it will simply answer X = 2 and Y = 3 if it has met such a task before. If it has not, it will still answer, but most likely incorrectly.
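
To make the contrast concrete, here is a toy Kotlin sketch, not a model of real neurons; the situation strings and stored answers are invented for illustration. The “algorithm” computes step by step, while the “reflex” merely replays a remembered answer and responds even to tasks it has never seen:

    // The "algorithm": computes X in total = a + X + b by explicit steps.
    fun algorithmSolve(total: Int, a: Int, b: Int): Int = total - a - b

    // The "reflex": a memory of previously encountered situations.
    val reflexMemory = mutableMapOf("1+X+3=6" to 2)

    // Always answers instantly, even for unseen tasks (then correct only by luck).
    fun reflexAnswer(situation: String): Int = reflexMemory.getOrDefault(situation, 2)

    fun main() {
        println(algorithmSolve(6, 1, 3))   // 2, derived step by step
        println(reflexAnswer("1+X+3=6"))   // 2, instant recall from memory
        println(reflexAnswer("1+X+Y=6"))   // 2, answers anyway despite never seeing the task
    }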

Why is it like this?

The answer lies in the energy cost of the nerve impulse. Moving a signal through the human nervous system is a very energy-intensive process: first a membrane potential (up to 90 mV) must be created on the surface of the neuron, then sharply shifted to generate a wave of depolarization. During a nerve impulse, ions rush across the membrane, after which the nerve cell must pump the sodium and potassium ions back to their original positions. This is the job of special molecular pumps, the sodium-potassium adenosine triphosphatases.

As a result, the nervous tissue turns out to be the most energy-consuming structure of our body. The human brain weighs on average 1.4 kilograms, or 2% of body weight, and consumes about 20% of all energy available to our body. In some children 4-6 years of age, the energy consumption of the brain reaches 60% of the energy available to the body!

All this forces nature to save the resources of the nervous system as much as possible.

To solve a single simple functional task, a nervous system needs about 100 compactly located neurons. Sea anemones (a class of coral polyps) have just such a simple nervous system of roughly 100 neurons, which lets them restore the original orientation of the body after being moved from one place to another.

More difficult tasks, more neurons


Additional tasks and functions require more processing power from the nervous system, which inevitably means recruiting larger groups of neurons. As a result, hundreds or thousands of voracious nerve cells are needed.

But nature knows how to find solutions even when it seems nothing more can be invented. If running the nervous system is so expensive, then perhaps the correct answer need not be bought at such a high price.

It is just cheaper to be wrong.

On the other hand, a mistake costs nature nothing. If an organism errs too often, it simply dies, and one that gives correct answers takes its place, even if those answers are the result of a fluke. Figuratively speaking, everything is simple in nature: only those who have given the correct answer live.

This suggests that the work of the nervous system is only superficially similar to an algorithm. At its heart there is no computation, only reflex: the simple repetition of stereotyped decisions stored in memory.

The nervous system of any living organism on our planet simply juggles pre-written crib sheets in the form of various memory mechanisms, but outwardly this looks like computational activity.

In other words, trying to beat a reflex with a computational algorithm is like trying to play fair against a card sharp.

This tactic, combined with synaptic plasticity, gives the biological nervous system tremendous efficiency.

In living nature, the brain is an extremely expensive commodity, so its operation is based on a simple but cheap reflex rather than an accurate but expensive algorithm. In this way, a small number of neurons can solve very complex problems, such as those associated with orientation. The secret is that the biological nervous system does not actually calculate anything; it simply remembers the correct answer. Over billions of years of evolution, and over the course of each individual life, a universal set of previously successful solutions has been assembled. And where no stored solution fits, it is not so terrible to be wrong. This allows even small and primitive nervous systems to respond to stimuli while simultaneously maintaining automatic functions such as muscle tone, breathing, digestion, and blood circulation.

Algorithms lose before the competition starts


All this suggests that if we try to create AI based on existing computational algorithms, we will fundamentally lose to nature, even at simple non-intellectual activities such as movement. Our electronic devices will be accurate, but so energy-intensive as to be thoroughly inefficient.

We can already see this in self-driving cars. One of the unexpected problems faced by developers of autonomous control systems is related to energy consumption. Experimental self-driving cars need special high-performance electric generators to power electronic control systems.

Nature, meanwhile, has amazingly simple nervous systems that cope perfectly with dynamic maneuvering. In nurse sharks, for example, which weigh up to 110 kilograms and can attack humans, the brain weighs only 8 grams, and the entire nervous system, including all the fibers of the peripheral section, a little more than 250 grams.

The main conclusion

The first thing we need in order to create real artificial intelligence is electronic systems that work on the principles of a biological reflex arc, that is, biological algorithms with zero discreteness.

Interestingly, structural block diagrams of biological algorithms already existed at the end of the last century, but because of their obscure zero discreteness they remained exotic. The only exception was evolutionary algorithms, which became the basis for evolutionary modeling in the field of computational intelligence.

Biology teaches us that in real life it is not the one who makes mistakes that loses, but the one who does not save resources.

There is no need to be afraid of mistakes. In fact, you need to be afraid of accurate answers paid for by high energy consumption.

But this is only part of the problem. Solving it will make it possible to create relatively simple artificial systems capable of controlling movement and fine motor skills.

To develop real artificial intelligence applicable in real life, we will have to figure out how the second, intellectual scheme of the human brain works.

Dr. Oleksandr Kostikov is a medical doctor by education based in Canada. He is working on a new theoretical concept about the nature of intelligence that also aims to create a completely new and unusual type of artificial intelligence.

This story originally appeared on Bdtechtalks.com. Copyright 2021

Android App Bundles are replacing APKs – why it matters

Google Play Store is constantly evolving to meet the growing needs and demands of Android users and developers. Many of those improvements rely on automated systems powered by AI and machine learning, particularly in screening apps for malware or prohibited content. There are times, however, when changes require developers to make changes in the way they write and distribute their apps. One of the most disruptive changes is coming in August when Google Play Store switches to App Bundles instead of APKs as its standard package format, a change that will affect not only developers but also Android users, hopefully for the better.

The What and Why of App Bundles

Short for Android Package, the APK has long been Android’s standard package format for apps and games. Analogous to Java’s JAR archive (of which it is, in fact, an extension), an APK is designed to bundle everything an app needs to be installed on a device, from code to assets like images and sounds, some of which come in different versions for different kinds and sizes of devices. As Android’s ecosystem grew, however, so did the number of things that had to be packaged into an APK for it to even work.

APKs, however, didn’t scale well with Android’s growth, and Google had to devise workarounds for larger apps, particularly games that sometimes needed gigabytes of additional data. That workaround came in the form of OBB files that had to be downloaded before you could even start playing a game or using an app. These are the problems Android App Bundles promise to solve, and while the mechanics should be invisible to users, the benefits should still be very noticeable.

Android App Bundles, shortened to AABs, will change the way Android apps are packaged and, more importantly, delivered. One of the most immediate differences is that a single package will no longer need to contain everything for every kind of Android device, meaning package sizes should be smaller and download times faster. In fact, App Bundles cap an app’s initial download size at 150MB.
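
On the developer side, producing a bundle instead of an APK mostly comes down to build configuration. Here is a minimal sketch of what the relevant section of an app’s build.gradle.kts might look like, with splitting enabled per language, screen density, and CPU architecture (the module layout is assumed, not taken from any particular project):

    // app/build.gradle.kts (sketch): let Google Play split the bundle so each
    // device downloads only the pieces it actually needs.
    android {
        bundle {
            language { enableSplit = true }   // ship only the user's languages
            density  { enableSplit = true }   // ship only matching screen densities
            abi      { enableSplit = true }   // ship only the device's CPU architecture
        }
    }

Running ./gradlew bundleRelease then produces the .aab file that gets uploaded to the Play Console in place of an APK.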

New ways to deliver the same things

For apps that need more than 150MB, App Bundles introduce a new feature to replace OBBs called Play Asset Delivery. Using better data compression and dynamic delivery strategies, this PAD system promises faster downloads for non-code assets as well, perhaps even while you are already playing the game. Future updates can also be smaller, because asset packs won’t contain all the new assets, only what changed between versions, a.k.a. their deltas. Play Asset Delivery also comes with a security benefit, since assets are stored in and downloaded from Google Play rather than some CDN hosting arrangement that developers provide on their own.
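
In build terms, an asset pack is its own Gradle module that declares how its contents should be delivered. A minimal sketch, with a hypothetical module and pack name:

    // texture_pack/build.gradle.kts (sketch, hypothetical module name)
    plugins {
        id("com.android.asset-pack")
    }

    assetPack {
        packName.set("texture_pack")          // hypothetical pack name
        dynamicDelivery {
            deliveryType.set("fast-follow")   // or "install-time" / "on-demand"
        }
    }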

Another new feature enabled by Android App Bundles, and not possible with APKs, is Play Feature Delivery. It extends the idea of shipping only the parts of the app a particular device needs, but focuses on the features required to actually start using the app as soon as possible. The idea is to let users open the app just seconds after installing it, deferring the download of other parts of the app until later.
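
At runtime, an on-demand feature module is requested through the Play Core library’s split-install API. A minimal Kotlin sketch; the module name "camera_filters" is hypothetical:

    import android.content.Context
    import com.google.android.play.core.splitinstall.SplitInstallManagerFactory
    import com.google.android.play.core.splitinstall.SplitInstallRequest

    // Ask Google Play to download and install a feature module on demand.
    fun requestCameraFilters(context: Context) {
        val manager = SplitInstallManagerFactory.create(context)
        val request = SplitInstallRequest.newBuilder()
            .addModule("camera_filters")               // hypothetical module name
            .build()
        manager.startInstall(request)
            .addOnSuccessListener { sessionId -> /* monitor progress via sessionId */ }
            .addOnFailureListener { error -> /* handle network/storage failures */ }
    }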

Android users shouldn’t need to do anything on their end to benefit from these changes, though app developers will have to do the heavy lifting on their part. Fortunately for them, Google Play Store’s Android App Bundle requirement, which becomes effective in August, only applies to new apps submitted to the app store. Of course, developers can also voluntarily adopt App Bundles if they want to improve the experience for users.

The Catch: It’s Google’s World

This definitely sounds great, at least for users, but it does come with some subtle fine print. All of these features are available only on Google Play Store, which sounds like a no-brainer but has important implications for some Android developers. Unlike APKs, Android App Bundles cannot be distributed outside of Google Play. This means developers switching from APKs to App Bundles can no longer provide the exact same package or experience on other app sources unless they opt to maintain a separate APK version. That naturally puts third-party app stores at a disadvantage, but Google will most likely play up the Play Store’s security as a major reason to avoid those sources anyway.

Android Apps coming to Windows 11: Why it matters

After months of speculation and rumors, Microsoft has now officially announced the Windows 11 operating system. The refreshed OS is coming to machines this holiday season, and excitement is already running high. A major part of the Windows 11 ecosystem will be the presence of all supported Android apps, which comes as interesting news for users.

Android apps will appear in the Start menu, have their own dedicated icons on the taskbar, and launch from desktop shortcuts too. Panos Panay, Chief Product Officer at Microsoft, said that Android apps will be installable without much hassle from the OS interface, and users will be able to enjoy their frequently used apps in the Windows 11 environment.

Later this year, when the operating system officially becomes available for purchase, or as an upgrade from compatible Windows 10 systems, the next era of Microsoft’s long-running legacy will begin.

What’s the perk?

This move will benefit users as well as developers: the former won’t have to rely on the web version of an application, while the latter will have the liberty to skip developing a dedicated Windows-only version of the app.

As a user, you’ll have the freedom to multitask in Windows applications like Word while having apps like Kindle, Ring, Instagram, or TikTok open in another window. The new Windows 11 Snap feature will come in handy for choosing the layout of multiple apps across your screen real estate. Total bliss for multitaskers who like to have their apps organized visually to their preference.

For those using a tablet or dual-screen gadget loaded with the next OS version, this will boost multitasking in interesting ways. Having a seamless ecosystem of all that you desire on your tablet or laptop is going to change the Windows dynamics in the coming years.

Amazon Appstore integration

The big surprise in bringing Android apps to the Windows 11 ecosystem is the integration of the Amazon Appstore marketplace. Rather than partnering directly with Google Play to offer Android apps, Microsoft is taking a detour. To make the whole experience seamless, these non-native apps will leverage Intel Bridge technology to run on x86 processors.

In a press call, Microsoft confirmed that these Android apps will work not only on Intel-powered systems but on AMD-based systems as well. This makes Windows 11 well suited to touch-centric workflow apps, since the range of compatible systems is going to be huge.

These developments make Windows 11 a much more inviting OS for mobile users too. Early builds of the operating system arrive as soon as next week via the Insider program, and it will be interesting to see what Microsoft ultimately presents to its users.

Sideloading apps and Play Services

Whether the new operating system will allow sideloading apps is anybody’s guess right now, but it would be a welcome option, at least for users. It would help fill the gaps by bringing apps to the ecosystem that are not offered officially via the Amazon Appstore.

Another important consideration is the level of involvement of Google Play Services, since the Amazon Appstore will be the kingpin managing apps here. By the look of things, we can rule out any chance of Play Services entering the mix. From the developers’ perspective, it will be interesting to see how much Amazon is willing to reduce its revenue cut in the future.

Amazon has reduced its revenue share to 20 percent for smaller developers, but keep in mind that Google takes only 15 percent from developers on their first million dollars in earnings. To gain a competitive edge, Amazon will have to come down a little, since it is Android apps that are in question here and more developers need to be attracted to the platform.

Flood of apps destined for the OS

While Windows’ own store is nothing much to talk about, this move will flood the Windows landscape with content like never before. There are almost 1.85 million Android apps out there, so you can very well imagine the possibilities in the near future.

The apps can be added to the centered taskbar, pinned, or snapped into the different multitasking modes. For now, there is no mention of the hardware or software requirements for these apps to run on Windows 11, but they should be nothing developers will have a hard time meeting.

The wrap-up

Those who have used both the Amazon Appstore and the Google Play Store know the wide gap in user experience between the two. So expecting a seamless, near-perfect integration between Windows and Android apps, like that between macOS and iOS, is a far-fetched dream for now. However, that doesn’t mean the Android app experience on Windows 11 won’t be worth watching.

We just hope Android apps don’t run into the same irritations they face on Chrome OS. Common sense suggests there will be no such bumps, since Windows 11 is a vast operating environment compared to Chrome OS, which is built around Google’s cloud services. And since Android apps are presumably an add-on feature rather than something core to the operating system, Microsoft should have all the loose ends tied up.

The move to collaborate with Amazon rather than Google makes perfect sense, as Microsoft has to worry about Chromebooks, which are hugely popular with users. In the end, it will all come down to the ease of use of the Amazon Appstore and how well Amazon can evolve the PC/Android experience in this niche ecosystem.

Why entrepreneurship in emerging markets matters

The Seedstars World Competition, the biggest global event for startups from emerging markets, is now live.

The tech entrepreneurship scene is largely dominated by Western startup stories, unicorns with unbelievable valuations, and highly charismatic founders. What about the rest of the world? More than 85% of the world’s population lives in so-called developing countries, or what we refer to as emerging markets: emerging because they carry deep historical burdens that have, in the past, prevented them from competing on a global level with international powers, but now hold immense potential to lead the future of our planet.

That’s why this year TNW partnered up with Seedstars, a fully-remote startup investor and accelerator that has been running the largest startup competition in emerging markets since 2013. After a startup selection process with over 5,000 applicants, 90+ local competitions, 10+ ecosystem networking events, 20 regional selection events, and five regional finals, there are only five startups remaining to compete for the title of Seedstars Global Winner.

The winning startup will be awarded a prize of $500K in equity investment and join the investment portfolio of Seedstars International, currently comprising 67 high-growth ventures from over 30 emerging markets.

“We are extremely grateful to our partners and global community who supported the realization of the competition this year in spite of all the challenges and physical limitations. This is the first time the tour takes a completely online format, which demonstrates an incredible eagerness from entrepreneurs to roll up their sleeves and get down to compete with thousands of other entrepreneurs from around the world,” says Alisée de Tonnac, co-CEO and co-founder of Seedstars.

It’s this eagerness and willingness to create a positive impact in their societies that motivates these entrepreneurs to keep on working in spite of the difficulties encountered in their regions. It’s imperative that more attention, resources, and investments are dedicated to the next generations that have the power to shape our world’s future.

This year’s global finalists are: PEGASI from Venezuela, IMAN from Uzbekistan, Finology from Malaysia, Ladda from Nigeria, and Fulfillment Bridge from Tunisia. 

Follow the event and find out who will come out on top. Along with the competition, you’ll also get to see an expert panel discussion on how to keep up with an ever-changing labor market, and how we can help entrepreneurs and institutions upskill to prepare for the future of work. 

This event is brought to you by Seedstars, along with their partners the Canton of VAUD, HEG-FR: The School Of Management Fribourg, Mada, and Presence Switzerland.

Check out TNW’s write-up on the finalists to learn more and cheer on your favorite startup!

Here’s Why Moving Chromebooks to Android 11 Matters

On day two of Google’s I/O developer conference, a lot was revealed about the future of Chrome OS. A key part of those plans is Google updating the Android runtime on Chromebooks to Android 11 throughout this year.

It’s been known for some time that select Chromebook models would be getting Android 11, but Google has now officially confirmed the technical aspect of the plans — and a new switch in the underlying Android runtime layer inside of Chrome OS.

According to Google, on “capable devices,” Android will move from running in a container inside Chrome OS to running in a new secure virtual machine. The plans were discussed toward the end of the “What’s new in Chrome OS” keynote, though Google didn’t provide a list of devices or a specific release date.

With the move away from running the Android runtime in a container (a software package that contains everything the software needs to run), there should be several key benefits under the hood of Chromebooks. The Android 11 experience on Chrome OS should be more secure and stable, and there should also be some performance improvements, somewhat similar to how Linux already runs on Chrome OS.

“As developers, you don’t need to worry about making any changes. This is just one way we’re investing to make sure that your apps and games are at their best on Chromebooks,” said Sanjay Nathwani, a product manager on the Chrome OS team.

More importantly, this move means the scope of Android updates on Chromebooks should improve. In a separate Google I/O session, a Google engineer said the switch makes the underlying Android environment in Chrome OS “more maintainable than it was before” by “reducing its divergence from mainline Android.”

Most Chromebooks today are running Android 9, and these changes might not be immediately noticeable for users or developers. However, with Google noting that usage of Android apps on Chrome OS has tripled since this point in 2020, there’s a lot of hope for performance gains.

Google also detailed a new Android 12-inspired design language coming to Chrome OS, as well as a new photos feature for the recently introduced Phone Hub.

Nvidia RTX DLSS: What It Is and Why It Matters

When they were launched in 2018, Nvidia’s Turing generation of GPUs introduced some intriguing new features for gamers everywhere. Ray tracing is the easiest to wrap your head around, but deep learning supersampling, or DLSS, is a little more nebulous.

Even if it’s more complicated to understand, DLSS is one of Nvidia’s most important graphical features, offering higher frame rates and resolutions while requiring fewer GPU resources. To help you understand just how it works, here’s our guide to everything you need to know about Nvidia’s RTX DLSS technology, so you can decide whether it’s enough of a reason to upgrade to a new RTX 30 series GPU.

What is DLSS?

Deep learning supersampling uses artificial intelligence and machine learning to produce an image that looks like a higher-resolution image, without the rendering overhead. Nvidia’s algorithm learns from tens of thousands of rendered image sequences created using a supercomputer. That trains the algorithm to produce similarly beautiful images without requiring the graphics card to work as hard.

DLSS also incorporates more traditional beautifying techniques like anti-aliasing to create an eventual image that looks like it was rendered at a much higher resolution and detail level, without sacrificing frame rate.

This is all possible thanks to Nvidia’s Tensor cores, which are only available in RTX GPUs (outside of data center solutions, such as the Nvidia A100). Although RTX 20 series GPUs have Tensor cores inside, the RTX 3070, 3080, and 3090 come with Nvidia’s second-generation Tensor cores, which offer greater per-core performance.

Though DLSS originally launched with little competition, other sharpening techniques from both AMD and Nvidia itself now compete with it for mindshare and effective use in 2020, even if they don’t work in quite the same way.

What does DLSS actually do?

DLSS is the end result of an exhaustive process of teaching Nvidia’s A.I. algorithm to generate better-looking games. After rendering the game at a lower resolution, DLSS infers information from its knowledge base of super-resolution image training to generate an image that still looks like it was running at a higher resolution. The idea is to make games rendered at 1440p look like they’re running at 4K, and 1080p games look like 1440p. DLSS 2.0 offers up to 4x upscaling, allowing you to render games at 1080p while outputting them at 4K.

More traditional super-resolution techniques can lead to artifacts and bugs in the eventual picture, but DLSS is designed to work with those errors to generate an even better-looking image. It’s still being optimized, and Nvidia claims that DLSS will continue to improve over the months and years to come, but in the right circumstances, it can deliver substantial performance uplifts, without affecting the look and feel of a game.

Where early DLSS games like Final Fantasy XV delivered modest frame rate improvements of just five to 15 FPS, more recent releases have seen far greater gains. With games like Deliver Us the Moon and Wolfenstein: Youngblood, Nvidia introduced a new A.I. engine for DLSS, which we’re told improves image quality, especially at lower resolutions like 1080p, and can increase frame rates in some cases by over 50%.

There are also new quality modes that DLSS users can choose between: Performance, Balanced, and Quality, each focusing the RTX GPU’s Tensor core horsepower on a different trade-off between frame rate and image quality.

How does DLSS work?

DLSS forces a game to render at a lower resolution (typically 1440p) and then uses its trained A.I. algorithm to infer what it would look like if it were rendered at a higher one (typically 4K). It does this by utilizing some anti-aliasing effects (likely Nvidia’s own TAA) and some automated sharpening. Visual artifacts that wouldn’t be present at higher resolutions are also ironed out and even used to infer the details that should be present in an image.
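
The arithmetic behind that resolution jump is simple, as this small Kotlin sketch shows. The per-mode scale factors used here are commonly cited approximations, not official Nvidia figures:

    // Internal render resolution for a given output resolution and DLSS mode.
    // Scale factors are commonly cited approximations, not official Nvidia specs.
    data class Resolution(val width: Int, val height: Int)

    fun internalResolution(output: Resolution, scale: Double) =
        Resolution((output.width * scale).toInt(), (output.height * scale).toInt())

    fun main() {
        val uhd = Resolution(3840, 2160)              // 4K output
        println(internalResolution(uhd, 0.5))         // Performance mode: 1920x1080, 1/4 the pixels
        println(internalResolution(uhd, 2.0 / 3.0))   // Quality mode: 2560x1440, ~44% of the pixels
    }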

As Eurogamer explains, the A.I. algorithm is trained to look at certain games at extremely high resolutions (supposedly 64x supersampling) and is distilled down to something just a few megabytes in size, before being added to the latest Nvidia driver releases and made accessible to gamers all over the world. Originally, Nvidia had to go through this process on a game-by-game basis. Now, with DLSS 2.0, Nvidia provides a general solution, so the A.I. model no longer needs to be trained for each game.

In effect, DLSS is a real-time version of Nvidia’s screenshot-enhancing Ansel technology. It renders the image at a lower resolution to provide a performance boost, then applies various effects to deliver a relatively comparable overall effect to raising the resolution.

The end result can be a mixed bag, but in general it leads to higher frame rates without a substantial loss in visual fidelity. Nvidia claims frame rates can improve by as much as 75% in Remedy Entertainment’s Control when using both DLSS and ray tracing. The gain is usually less pronounced than that, and not everyone is a fan of the eventual look of a DLSS game, but the option is certainly there for those who want to beautify their games without the cost of running at a higher resolution.

In Death Stranding, we saw significant improvements at 1440p over native rendering. Performance mode lost some of the finer details on the back package, particularly in the tape. Quality mode maintained most of the detail while smoothing out some of the rough edges of the native render. Our “DLSS off” screenshot shows the quality without any anti-aliasing. Although DLSS doesn’t maintain that level of quality, it’s very effective in combating aliasing while maintaining most of the detail.

We didn’t see any over-sharpening in Death Stranding, but that’s something you might encounter while using DLSS.

Better over time

Deep learning supersampling has the potential to give gamers who can’t quite reach comfortable frame rates at resolutions above 1080p the ability to get there through inference. DLSS could end up being the most impactful feature of Nvidia’s RTX GPUs going forward. The cards aren’t as powerful as we might have hoped, and ray-tracing effects are pretty but tend to have a sizable impact on performance, yet DLSS could give us the best of both worlds: better-looking games that perform better, too.

The best place for this kind of technology could be in lower-end cards, but, unfortunately, it’s only supported by RTX graphics cards, the weakest of which is the RTX 2060 — a $300 card. The new RTX 3000 GPUs offer a glimpse as to how Nvidia will use DLSS in the future: Pushing resolutions above 4K while maintaining stable frame rates.

Nvidia has shown the RTX 3090, a $1,500 GPU with 24GB of memory, rendering games like Wolfenstein: YoungBlood at 8K with ray tracing and DLSS turned on. Although wide adoption of 8K is still a ways off, 4K displays are becoming increasingly common. Instead of rendering at native 4K and hoping to stick around 50-60 FPS, gamers can render at 1080p or 1440p and use DLSS to fill in the missing information. The result is higher frame rates without a noticeable loss in image quality.

DLSS will only continue to improve over time because it operates via a neural network. Already, the original DLSS had far more artifacts than the current DLSS 2.0, which means games like Death Stranding get a clearer picture than other image-reconstruction tools, like checkerboard rendering, can offer. But however amazing DLSS makes games look, it only matters if a game is actually compatible with the technology.

Only 15 games currently support DLSS 2.0, fewer than support ray tracing. Fortunately, developers will likely adopt the tech for widespread use soon: DLSS support is coming to upcoming releases like Call of Duty: Black Ops Cold War and Cyberpunk 2077. With the impending launch of Ampere GPUs, developers may also look for new ways to stretch system resources. While it hasn’t yet been widely adopted, DLSS is well developed and should be easy for designers to integrate into their games.

DLSS is easy enough to enable, and it gives RTX GPUs a serious boost, so we could see it in many new games over the next couple of years. If this technology booms the way we at Digital Trends expect it to, AMD will have to introduce something similar just to stay relevant in the gaming market.
