Categories
Computing

The revolutionary PC gaming tech developers are ignoring

Variable Rate Shading, or VRS, is a major piece of graphics tech that PC games have largely ignored for the past three years. It works on all modern AMD and Nvidia graphics cards, and it has a simple goal: Improve performance by as much as 20% without any perceivable drop in image quality.

Sounds amazing, right? Well, there’s a reason you probably haven’t heard much about it. The last couple of years have focused on Nvidia’s Deep Learning Super Sampling (DLSS) and AMD’s FidelityFX Super Resolution (FSR) as the performance-saving champions of the modern graphics era. And although they offer the best bang for the game developer’s buck, VRS is an equally impressive tool that’s been woefully underused.

Variable Rate Shading: Not new

Microsoft / The Coalition

VRS isn’t new — Microsoft’s blog post announcing the feature in DirectX 12 is over three years old. If you’re not familiar, VRS changes the resolution at which shaders are applied within a scene. It’s not changing the resolution of the game; VRS simply allows neighboring pixels to share a shader rather than having the GPU do redundant work.

If there’s a corner of a scene wrapped in shadow without a lot of detail, for example, your graphics card doesn’t need to calculate the light, color, and texture values for each pixel. It can save some hassle by grouping them together — four pixels in a 2×2 grid may have extremely similar shading values, so VRS kicks in to optimize performance by only calculating one shader and applying it to the rest of the grid. The size of the grid is the shading rate, and more pixels in a grid means a lower shading rate.

That small change can make a big difference in performance. In Gears Tactics at 4K, for example, VRS offered a 22.9% increase in my average frame rate. That’s the best example, but Resident Evil Village also showed a 9.8% increase in my average frame rate, while Hitman 3 offered a solid 8% boost. And the idea behind VRS is that it should be indistinguishable when it’s turned on, essentially offering free performance.

VRS performance in three video games.

Only a small number of PC games support VRS, despite the feature being more than three years old. I’ll address that issue later in the column, but the more pressing question is how VRS is used in the few games that do support it.

There are two buckets for VRS: One that makes it look like a revolutionary piece of kit that offers free performance, and another that makes it look like a feature that hurts more than it helps.

Two worlds of VRS

A debug screen for VRS in Dirt 5.
Codemasters

Microsoft has two tiers of VRS in DirectX 12 Ultimate: The aptly-named Tier 1 and Tier 2. Tier 1 VRS is the most common technique you’ll find in games, which is the heart of the problem. This level doesn’t concern itself with individual pixels, and it instead applies different shading rates to each draw call. When there’s a call to draw background assets, for example, they may have a 2×2 shading rate, while assets drawn in the foreground have a shading rate of 1×1.

Tier 2 VRS is what you want. It’s far more granular, allowing the developer to vary the shading rate within a single draw call. That means one part of a model can have a shading rate of 2×2, for example, while a more detailed area on that same model uses 1×1. That granularity is what makes Tier 2 ideal: the developer can spend full-rate shading only on the details that matter and squeeze out every ounce of performance everywhere else.
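
To make the Tier 2 idea concrete, here is a deliberately simplified, CPU-side sketch in Python. It is not the DirectX 12 API; the per-tile detail metric, the 0.5 threshold, and the shade() stand-in are all invented for illustration. It only shows the arithmetic: a shading-rate mask lets flat tiles share one shader evaluation per 2×2 block, while detailed tiles keep one evaluation per pixel.

```python
# Toy illustration of Tier 2-style variable rate shading on the CPU.
# Everything here (detail metric, threshold, shade function) is a stand-in.
import numpy as np

H, W = 8, 8
rng = np.random.default_rng(0)
detail = rng.random((H // 2, W // 2))     # pretend per-tile detail metric
coarse_tiles = detail < 0.5               # low-detail tiles get a 2x2 rate

def shade(y, x):
    """Stand-in for an expensive per-pixel shader."""
    return ((y * W + x) % 7) / 7.0

image = np.zeros((H, W))
invocations = 0
for ty in range(H // 2):
    for tx in range(W // 2):
        ys, xs = ty * 2, tx * 2
        if coarse_tiles[ty, tx]:
            # 2x2 rate: one shader call is shared by the whole block.
            image[ys:ys + 2, xs:xs + 2] = shade(ys, xs)
            invocations += 1
        else:
            # 1x1 rate: one shader call per pixel.
            for y in range(ys, ys + 2):
                for x in range(xs, xs + 2):
                    image[y, x] = shade(y, x)
                    invocations += 1

print(f"{invocations} shader invocations for {H * W} pixels")
```

On real hardware, that rate mask is a small screen-space image the GPU consumes directly, which is exactly the capability that separates Tier 2-class hardware from Tier 1.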

VRS comparison in Resident Evil Village.
Left: VRS Off, Right: VRS On

The problem: Even among the small pool of games that support VRS, most of them only use Tier 1. Resident Evil Village, the most recent game I looked at, uses Tier 1 VRS. You can see how that impacts the image quality above, where you can make out pixels in the snow as Tier 1 VRS lumps together everything a few feet away from the camera.

Contrast that with Gears Tactics, which supports Tier 2 VRS. There’s a minor difference in quality when zoomed in to nearly 200%, but it looks much better than Tier 1. You can spot the difference when the two frames are side by side and zoomed in, but put them back to back in a blind test and you wouldn’t be able to tell them apart. I certainly couldn’t.

VRS comparison in Gears Tactics.
Left: VRS Off, Right: VRS On

Free performance for virtually no loss in image quality is a huge deal, but on PC at least, VRS isn’t in the conversation as much as it should be (let alone the distinction between Tier 1 and Tier 2). Even after Gears Tactics and Gears 5 moved to Tier 2 VRS, developers haven’t jumped on the performance-saving train. Instead, VRS work has mostly focused on the limited power budgets of consoles, and there’s one particular console holding the feature back.

A console blockade

A PS5 standing on a table, with purple lights around it.
Martin Katler/Unsplash

The reason VRS comes in two flavors is that Tier 2 requires specific hardware to work. Nvidia’s RTX graphics cards and AMD’s RX 6000 GPUs have hardware support, as does the Xbox Series X. Older graphics cards and the PlayStation 5 do not. Instead, they use a software-based version of Tier 1 VRS, if it’s even available in the game at all.

Developers working on multi-platform titles are usually going to focus on the lowest common denominator, which means Tier 1 VRS. There are only a few developers who have gone out of their way to support Tier 2 VRS on supported hardware (id Software uses Tier 2 VRS on Doom Eternal for the Xbox Series X, for example), but the vast majority of modern AAA games either don’t support VRS or use this Tier 1 approach.

As Gears Tactics shows, a proper Tier 2 implementation from the developer offers the best image quality and performance. It’s true that DLSS and FSR provide an easy solution for developers to improve performance in PC games. But proper Tier 2 VRS can represent around a 20% boost for barely any difference in image quality, and that’s too good to ignore.

This article is part of ReSpec – an ongoing biweekly column that includes discussions, advice, and in-depth reporting on the tech behind PC gaming.


Categories
Game

Capcom is using Stadia tech for a web-based ‘Resident Evil Village’ demo

Starting today, you can stream a free demo of Resident Evil Village from Capcom’s website, with no need for a fancy gaming PC, Xbox or PlayStation. The demo is similar to one that’s available on other platforms, which allows players to explore parts of the village and castle. This appetizer for one of last year’s biggest-selling games is powered by Immersive Stream for Games, a version of Stadia tech that Google is licensing to others.

The demo will work on just about any computer, as well as iOS and Android phones and tablets, as long as the device can handle high-definition video and you have a sturdy enough internet connection (with a download speed of at least 10Mbps). It runs on Chrome on Windows, macOS and Android. On iOS, you can try it on Safari. The resolution tops out at 1080p and there’s no HDR mode.

PlayStation DualShock 4 and Xbox One controllers are officially supported, but other peripherals might work. Alternatively, you can use touch controls on mobile or a mouse and keyboard. 

Resident Evil Village touch controls

Capcom

As with Stadia’s click-to-play trials, there’s no need to register to play the demo. It’s worth noting that you’ll be disconnected after 10 minutes of inactivity. There’s no save function, so you’ll need to restart from the beginning if you disconnect. You can play as many times as you like and there’s no time limit, unlike previous versions of the demo.

You can play the demo on Capcom’s website if you are in the US, UK, Canada, France, Italy, Germany, Austria, Spain, Sweden, Switzerland, Denmark, Norway, Finland, Belgium, Ireland, the Netherlands, Poland, Portugal, Czech Republic, Slovakia, Romania or Hungary.

Capcom is Google’s second partner for Immersive Stream for Games. AT&T started offering its wireless customers free access to Batman: Arkham Asylum last October and Control: Ultimate Edition last month. Capcom seems more of a natural bedfellow, though.

Back in February, Insider reported that Google was looking to secure deals with Capcom, Peloton and others to build the licensing aspect of its game-streaming business. It was suggested that Capcom might use the tech to stream demos from its website, which turned out to be the case. This could even be a precursor to Capcom running its own game streaming storefront.

In other Resident Evil Village news, Capcom is bringing the game to Mac later this year. It’s also working on a version for the upcoming PlayStation VR2 headset.

All products recommended by Engadget are selected by our editorial team, independent of our parent company. Some of our stories include affiliate links. If you buy something through one of these links, we may earn an affiliate commission.


Categories
Computing

Google will help open-source tech fight cyberattacks

At a time when cyberattacks are happening with increasing frequency, Google has announced a new security tool aimed at making open-source software safer.

Assured Open Source Software (OSS) will enable users to incorporate the same vetted open-source packages that Google uses into their own workflows.

Primakov/Shutterstock

Open-source software continues to be a popular target for security attacks, and as Google notes in its announcement, there has been a massive 650% year-over-year increase in the number of cyberattacks aimed at open-source suppliers. Seeing as software supply chains often utilize open-source code to remain accessible and easy to customize, they are especially vulnerable to these kinds of attacks.

Google is far from the only entity to address the fact that open-source software, despite its plentiful benefits, can be easily abused. The company, alongside the OpenSSF and the Linux Foundation, is following up on the security initiatives brought up during the recent White House Summit on Open Source Security. Microsoft has also recently announced a new cybersecurity initiative.

There have been numerous high-profile cybersecurity vulnerabilities in the recent past, such as Log4Shell and Spring4Shell. In an attempt to prevent such attacks from taking place, Google has now introduced Assured OSS.

As part of Assured OSS, Google hopes to enable users in both the enterprise and public sectors to work the Google-curated OSS packages into their own developer workflows. For its part, the company promises that the packages curated by the service will be regularly scanned, fuzz-tested, and analyzed to make sure that no vulnerabilities slip past the defenses.

All the packages will be built with Google’s Cloud Build and will thus come with verifiable SLSA-compliance. SLSA stands for Supply-chain Levels for Software Artifacts and is a well-known framework that aims to standardize the security of software supply chains. Every package will also be verifiably signed by Google and will come with corresponding metadata incorporating Google’s Container/Artifact analysis data.

To further bring cybersecurity into focus, Google has also announced a new partnership with Snyk, an Israeli developer security platform. Assured OSS will be integrated into Snyk solutions from the get-go, allowing customers of both companies to benefit.

Google pointed out a staggering statistic: Within the 550 most common open-source projects that it regularly scans, it has managed to find more than 36,000 vulnerabilities as of January 2022. That alone shows how important it is to crack down on the vulnerability of these projects, seeing as open-source software is popular, needed, and definitely here to stay. Perhaps Google’s Assured OSS can make it more secure for everyone who benefits from it.


Categories
AI

Report: Data and enterprise automation will drive tech and media spending to $2.5T



According to a new report released by Activate Consulting, global technology and media spending will balloon to $2.5 trillion by 2025. The analysis comes as 2021 saw spending of more than $2 trillion.

The report indicates that one of the major drivers of this tech boom will be data solutions and enterprise automation.  According to the report, “Activate Technology and Media Outlook for 2022,” a set of new companies are paving the way for the future, delivering infrastructure, tools, and applications that will enable all enterprises to operate and innovate as if they were major technology companies.

Businesses and consumers can expect to see accelerated development of customer experiences, better (faster, less bureaucratic) employee experiences, improved intelligence and decision-making, and improved operational and financial efficiency as a result. Technologies like autonomy (self-driving cars, home automation), voice recognition, AR/VR, and gaming will enable new end-user experiences, while enterprises will see gains in marketing effectiveness, IT service management, cross-planning and forecasting, and more.

New data startups are spurring the next era of innovation.  They’re focusing on leveraging data and information, improving end-user experience, and improving storage and connectivity — all of which will drive the business-to-business and business-to-consumer experiences of the future.


According to the report, more than 80% of the companies driving this innovation are U.S.-based, half of which are headquartered in the Bay Area.  They’re growing fast thanks to large venture capital infusions – and many of these startup companies have scaled at an unprecedented pace.  Fifteen of them have raised more than $1 billion since their launch.

In order for the next generation of companies to reach their full potential, the report indicates they must zero in on three areas: strategy and transformation, go-to-market pricing, and their sales and marketing approach.

Read the full report by Activate Consulting.


Categories
AI

ML-driven tech is the next breakthrough for advances in biology



This article was contributed by Luis Voloch, cofounder and chief technology officer at Immunai.

Digital biology is in the same stage (early, exciting, and transformative) of development as the internet was back in the 90s. At the time, the concept of IP addresses was new, and being “tech-savvy” meant you knew how to use the internet. Fast-forward three decades, and today we enjoy industrialized communication on the internet without having to know anything about how it works. The internet has a mature infrastructure that the entire world benefits from.

We need to bring similar industrialization to biology. Fully tapping into its potential will help us fight devastating diseases like cancer. A16z has rephrased its famous motto of “Software is eating the world” to “Biology is eating the world.” Biology is not just a science; it’s also becoming an engineering discipline. We are getting closer to being able to ‘program biology’ for diagnostic and treatment purposes.

Integrating advanced technology like machine learning into fields such as drug discovery will make it possible to accelerate the process of digitized biology. However, to get there, there are large challenges to overcome.

Digitized biology: Swimming in oceans of data

Not so long ago, gigabytes of biological data were considered a lot; over the coming years, we expect the biological data generated to be measured in exabytes. Working with data at these scales is a massive challenge. To meet it, the industry has to develop and adopt modern data management and processing practices.

The biotech industry does not yet have a mature culture of data management. Results of experiments are gathered and stored in different locations, in a variety of messy formats. This is a significant obstacle to preparing the data for machine learning training and doing analyses quickly. It can take months to prepare digitized data and biological datasets for analysis.

Advancing biological data management practices will also require standards for describing digitized biology and biological data, similar to our standards for communication protocols.

Indexing datasets in central data stores and following data management practices that have become mainstream in the software industry will make it much easier to prepare and use datasets at the scale we collectively need. For this to happen, biopharma companies will need C-suite support and widespread cultural and operational changes.

Welcome to the world of simulation

It can cost millions of dollars to run a single biological experiment. Costs of this magnitude make it prohibitive to run experiments at the scale we would need, for example, to bring true personalization to healthcare — from drug discovery to treatment planning. The only way to address this challenge is to use simulation (in-silico experiments) to augment biological experiments. This means that we need to integrate machine learning (ML) workflows into biological research as a top priority.

With the artificial intelligence industry booming and with the development of computer chips designed specifically for machine learning workloads, we will soon be able to run millions of in-silico experiments in a matter of days for the same cost that a single live experiment takes to run over a period of months.

Of course, simulated experiments suffer from a lack of fidelity relative to biological experiments. One way to overcome this is to validate the most promising in-silico results with in vitro or in vivo experiments. Feeding the results of those in vitro/in vivo experiments back into the models creates a feedback loop in which each round of physical experiments becomes training data for future predictions, leading to increased accuracy and reduced experimental costs in the long run. Several academic groups and companies are already using such approaches and have reduced costs by 50 times.
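
To make that loop concrete, here is a rough Python sketch of the pattern, sometimes called lab-in-the-loop active learning. The surrogate model, the candidate features, and the run_wet_lab_experiment stub are hypothetical placeholders rather than any particular company’s pipeline: an in-silico screen ranks a large virtual library, only the most promising handful of candidates go to a physical experiment, and the measurements are folded back into the training set.

```python
# Hypothetical sketch of an in-silico / in-vitro feedback loop.
# The model, features, and wet-lab stub are illustrative stand-ins only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def run_wet_lab_experiment(candidates):
    """Placeholder for a real in vitro / in vivo measurement."""
    return candidates.sum(axis=1) + rng.normal(0, 0.1, len(candidates))

# Start with a small set of measured compounds (features -> assay readout).
X_train = rng.random((20, 5))
y_train = run_wet_lab_experiment(X_train)

for round_idx in range(3):                       # a few design-make-test cycles
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)

    # In-silico screen: score a large virtual library cheaply.
    library = rng.random((10_000, 5))
    scores = model.predict(library)

    # Send only the most promising handful to the expensive physical experiment.
    top = library[np.argsort(scores)[-8:]]
    measured = run_wet_lab_experiment(top)

    # Fold the new measurements back into the training data.
    X_train = np.vstack([X_train, top])
    y_train = np.concatenate([y_train, measured])
    print(f"round {round_idx}: best measured readout so far = {y_train.max():.2f}")
```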

This approach of using machine learning models to select experiments and to consistently feed experimental data to ML training should become an industry standard.

Masters of the universe

As Steve Jobs once famously said, “The people who are crazy enough to think they can change the world are the ones who do.”

The last two decades have brought epic technological advancements in genome sequencing, software development, and machine learning. All these advancements are immediately applicable to the field of biology. All of us have the chance to participate and to create products that can significantly improve conditions for humanity as a whole.

Biology needs more software engineers, more infrastructure engineers, and more machine learning engineers. Without their help, it will take decades to digitize biology. The main challenge is that biology as a domain is so complex that it intimidates people. In this sense, biology reminds me of computer science in the late ’80s, when developers needed to know electrical engineering in order to develop software.

For anyone in the software industry, perhaps I can suggest a different way of viewing this complexity: Think of the complexity of biology as an opportunity rather than an insurmountable challenge. Computing and software have become powerful enough to shift us into an entirely new gear of biological understanding. You are the first generation of programmers to have this opportunity. Grab it with both arms.

Bring your skills, your intelligence, and your expertise to biology. Help biologists to scale the capacity of technologies like CRISPR, single-cell genomics, immunology, and cell engineering. Help discover new treatments for cancer, Alzheimer’s, and so many other conditions against which we have been powerless for millennia. Until now.

Luis Voloch is cofounder and Chief Technology Officer at Immunai


Categories
AI

LinkedIn and Intel tech leaders on the state of AI



Disclosure: The author is the managing director of Connected Data World.

AI is on a roll. Adoption is increasing across the board, and organizations are already seeing tangible benefits. However, the definition of what AI is and what it can do is up for grabs, and the investment required to make it work isn’t always easy to justify. Despite AI’s newfound practicality, there’s still a long way to go.

Let’s take a tour through the past, present, and future of AI, and learn from leaders and innovators from LinkedIn, Intel Labs, and cutting-edge research institutes.

Connecting data with duct tape at LinkedIn

Mike Dillinger is the technical lead for Taxonomies and Ontologies at LinkedIn’s AI Division. He has a diverse background, ranging from academic research to consulting on translation technologies for Fortune 500 companies. For the last several years, he has been working with taxonomies at LinkedIn.

LinkedIn relies heavily on taxonomies. As the de facto social network for professionals, LinkedIn has made a skill-building platform a central piece of its strategy. Following a statement from CEO Ryan Roslansky, LinkedIn Learning Hub was recently announced, powered by the LinkedIn Skills Graph, dubbed “the world’s most comprehensive skills taxonomy.”

The Skills Graph includes more than 36,000 skills, more than 14 million job postings, and the largest professional network with more than 740 million members. It empowers LinkedIn users with richer skill development insights, personalized content, and community-based learning.

For Dillinger, however, taxonomies may be overrated. In his upcoming keynote in Connected Data World 2021, Dillinger is expected to refer to taxonomies as the duct tape of connecting data. This alludes to Perl, the programming language that was often referred to as the duct tape of the internet.

“Duct tape is good because it’s flexible and easy to use, but it tends to hide problems rather than fix them,” Dillinger said.

A lot of effort goes into building taxonomies, making them correct and coherent, then getting sign-off from key stakeholders. But this is when problems start appearing.

Key stakeholders such as product managers, taxonomists, users, and managers take turns punching holes in what was carefully constructed. They point out issues of coverage, accuracy, scalability, and communication. And they’re all right from their own point of view, Dillinger concedes. So the question is — what gives?

Dillinger’s key thesis is that taxonomies are simply not very good as a tool for knowledge organization. That may sound surprising at first, but coming from someone like Dillinger, it carries significant weight.

Dillinger goes a long way to elaborate on the issues with taxonomies, but perhaps more interestingly, he also provides hints for a way to alleviate those issues:

“The good news is that we can do much better than taxonomies. In fact, we have to do much better. We’re building the foundations for a new generation of semantic technologies and artificial intelligence. We have to get it right,” says Dillinger.

Dillinger goes on to talk about more reliable building blocks than taxonomies for AI. He cites concept catalogs, concept models, explicit relation concepts, more realistic epistemological assumptions, and next-generation knowledge graphs.

It’s the next generation, Dillinger says, because today’s knowledge graphs do not always use concepts with explicit, human-readable semantics. These building blocks have many advantages over taxonomies, and we need to work at the people, process, and tool levels to get there.
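
A toy example makes the distinction easier to see. In the hypothetical snippet below, the concepts and relation names are invented for illustration: a taxonomy can only say that one thing is a kind of another, while a knowledge graph carries explicitly named, human-readable relations that can be queried directly.

```python
# Invented toy example: a taxonomy expresses only "is-a" hierarchy,
# while a knowledge graph stores explicitly named relations between concepts.

# Taxonomy: a tree of broader/narrower terms.
taxonomy = {
    "skill": ["programming language", "framework"],
    "programming language": ["Python", "Java"],
    "framework": ["TensorFlow"],
}

# Knowledge graph: (subject, relation, object) triples with explicit semantics.
knowledge_graph = [
    ("TensorFlow", "is_written_in", "Python"),
    ("TensorFlow", "is_used_for", "machine learning"),
    ("machine learning", "requires_skill", "Python"),
]

def related(entity, relation):
    """Follow one explicitly named relation from an entity."""
    return [o for s, r, o in knowledge_graph if s == entity and r == relation]

# The taxonomy can say TensorFlow is a framework; only the graph can say
# what it is written in or what it is used for.
print(related("TensorFlow", "is_written_in"))   # ['Python']
```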

Thrill-K: Rethinking higher machine cognition

The issue of knowledge organization is a central one for Gadi Singer as well. Singer is VP and director of Emergent AI at Intel Labs. With one technology after another, he has been pushing the leading edge of computing for the past four decades and has made key contributions to Intel’s computer architectures, hardware and software development, AI technologies, and more.

Singer said he believes that the last decade has been phenomenal for AI, mostly because of deep learning, but there’s a next wave that is coming: a “third wave” of AI that is more cognitive, has a better understanding of the world, and higher intelligence. This is going to come about through a combination of components:

“It’s going to have neural networks in it. It’s going to have symbolic representation and symbolic reasoning in it. And, of course, it’s going to be based on deep knowledge. And when we have it, the value that is provided to individuals and businesses will be redefined and much enhanced compared to even the great things that we can do today”, Singer says.

In his upcoming keynote for Connected Data World 2021, Singer will elaborate on Thrill-K, his architecture for rethinking knowledge layering and construction for higher machine cognition.

Singer distinguishes recognition, as in the type of pattern-matching operation using shallow data and deep compute at which neural networks excel, from cognition. Cognition, Singer argues, requires understanding the very deep structure of knowledge.

To be able to process even seemingly simple questions requires organizing an internal view of the world, comprehending the meaning of words in context, and reasoning on knowledge. And that’s precisely why even the more elaborate deep learning models we have currently, namely language models, are not a good match for deep knowledge.

Language models contain statistical information, factual knowledge, and even some common sense knowledge. However, they were never designed to serve as a tool for knowledge organization. Singer believes there are some basic limitations in language models that make them good, but not great for the task.

Singer said that what makes for a great knowledge model is strength across five areas: scalability, fidelity, adaptability, richness, and explainability. He adds that sometimes there is so much information learned in language models that we can extract it and use it to enhance dedicated knowledge models.

To translate the principles of having a great knowledge model to an actual architecture that can support the next wave of AI, Singer proposes an architecture for knowledge and information organized at three levels, which he calls Thrill-K.

The first level is for the most immediate knowledge, which Singer calls the Giga scale, and believes should sit in a neural network.

The next level of knowledge is the deep knowledge base, such as a knowledge graph. This is where intelligible, structured, explicit knowledge is stored at the Terascale, available on demand for the neural network.

And, finally, there’s the world information and the world knowledge level, where data is stored at the Zetta scale.

Knowledge, Singer argues, is the basis for making reasoned, intelligent decisions. It can adapt to new circumstances and new tasks because the data and the knowledge are not structured for a particular task; they are there with all their richness and expressivity.

It will take concerted effort to get there, and Intel Labs on its part is looking into aspects of NLP, multi-modality, common sense reasoning, and neuromorphic computing.

Systems that learn and reason

If knowledge organization is something that both Dillinger and Singer value as a key component in an overarching framework for AI, for Frank van Harmelen it’s the centerfold in his entire career. Van Harmelen leads the Knowledge Representation & Reasoning Group in the Computer Science Department of the VU University Amsterdam.

He is also Principal Investigator of the Hybrid Intelligence Centre, a $22.7 million (€20 million), ten-year collaboration between researchers at six Dutch universities into AI that collaborates with people instead of replacing them.

Van Harmelen notes that after the breakthroughs of machine learning (deep learning or otherwise) in the past decade, the shortcomings of machine learning are also becoming increasingly clear: unexplainable results, data hunger, and limited generalisability are all becoming bottlenecks.

In his upcoming keynote at Connected Data World 2021, Van Harmelen will look at how the combination with symbolic AI, in the form of very large knowledge graphs, can give us a way forward: toward machine learning systems that can explain their results, that need less data, and that generalize better outside their training set.

The emphasis in modern AI is less on replacing people with AI systems, and more on AI systems that collaborate with people and support them. For Van Harmelen, however, it’s clear that current AI systems lack background knowledge, contextual knowledge, and the capability to explain themselves, which makes them not very human-centered:

“They can’t support people and they can’t be competent partners. So what’s holding AI back? Why are we in this situation? For a long time, AI researchers have locked themselves into one of two towers. In the case of AI, we could call these the symbolic AI tower and the statistical AI tower”.

If you’re in the statistical AI camp, you build your neural networks and machine learning programs. If you’re in the symbolic AI camp, you build knowledge bases and knowledge graphs and you do inference over them. Either way, you don’t need to talk to people in the other camp, because they’re wrong anyway.

What’s actually wrong, argues Van Harmelen, is this division. Our brains work in both ways, so there’s no reason why approximating them with AI should rely exclusively on either approach. In fact, those approaches complement each other very well in terms of strengths and weaknesses.

Symbolic AI, most famously knowledge graphs, is expensive to build and maintain because it requires manual effort. Statistical AI, most famously deep learning, requires lots of data, and often lots of effort as well. Both suffer from the “performance cliff” issue: their performance drops under certain circumstances, but the circumstances and the manner of failure differ.

Van Harmelen provides many examples of practical ways in which symbolic and statistical AI can complement each other. Machine learning can help build and maintain knowledge graphs, and knowledge graphs can provide context to improve machine learning:

“It is no longer true that symbolic knowledge is expensive and we cannot obtain it all. Very large knowledge graphs are witness to the fact that this symbolic knowledge is very well available, so it is no longer necessary to learn what we already know.

We can inject what we already know into our machine learning systems, and by combining these two types of systems produce more robust, more efficient, and more explainable systems,” says Van Harmelen.

The pendulum has been swinging back and forth between symbolic and statistical AI for decades now. Perhaps it’s a good time for the two camps to reconcile and start a conversation. To build AI for the real world, we’ll have to connect more than data. We’ll also have to connect people and ideas.


Categories
AI

Nvidia’s latest AI tech translates text into landscape images



Nvidia today detailed an AI system called GauGAN2, the successor to its GauGAN model, that lets users create lifelike landscape images that don’t exist. Combining techniques like segmentation mapping, inpainting, and text-to-image generation in a single tool, GauGAN2 is designed to create photorealistic art with a mix of words and drawings.

“Compared to state-of-the-art models specifically for text-to-image or segmentation map-to-image applications, the neural network behind GauGAN2 produces a greater variety and higher-quality of images,” Isha Salian, a member of Nvidia’s corporate communications team, wrote in a blog post. “Rather than needing to draw out every element of an imagined scene, users can enter a brief phrase to quickly generate the key features and theme of an image, such as a snow-capped mountain range. This starting point can then be customized with sketches to make a specific mountain taller or add a couple of trees in the foreground, or clouds in the sky.”

Generated images from text

GauGAN2, whose namesake is post-Impressionist painter Paul Gauguin, improves upon Nvidia’s GauGAN system from 2019, which was trained on more than a million public Flickr images. Like GauGAN, GauGAN2 has an understanding of the relationships among objects like snow, trees, water, flowers, bushes, hills, and mountains, such as the fact that the type of precipitation changes depending on the season.

GauGAN and GauGAN2 are a type of system known as a generative adversarial network (GAN), which consists of a generator and a discriminator. The generator takes samples — e.g., images paired with text — and predicts which data (words) correspond to other data (elements of a landscape picture). It is trained by trying to fool the discriminator, which assesses whether the predictions seem realistic. While the generator’s outputs are initially poor in quality, they improve with feedback from the discriminator.
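
For readers who want to see the mechanics, below is a minimal PyTorch sketch of that adversarial loop on toy one-dimensional data. It bears no relation to GauGAN2’s actual architecture or its text conditioning; the network sizes, learning rates, and the “real” data distribution are arbitrary choices made purely to illustrate the generator-versus-discriminator training described above.

```python
# Minimal GAN training loop on toy 1-D data (not GauGAN2's architecture);
# it only illustrates the generator-vs-discriminator dynamic.
import torch
import torch.nn as nn

torch.manual_seed(0)
real_data = lambda n: torch.randn(n, 1) * 0.5 + 2.0   # "real" samples ~ N(2, 0.5)

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))              # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())  # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # Train the discriminator to tell real samples from generated ones.
    real = real_data(64)
    fake = G(torch.randn(64, 8)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Train the generator to fool the discriminator.
    fake = G(torch.randn(64, 8))
    loss_g = bce(D(fake), torch.ones(64, 1))   # generator wants D to say "real"
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()

print("mean of generated samples:", G(torch.randn(1000, 8)).mean().item())
```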

Unlike GauGAN, GauGAN2 — which was trained on 10 million images — can translate natural language descriptions into landscape images. Typing a phrase like “sunset at a beach” generates the scene, while adding adjectives like “sunset at a rocky beach” or swapping “sunset” to “afternoon” or “rainy day” instantly modifies the picture.

GauGAN2

With GauGAN2, users can generate a segmentation map — a high-level outline that shows the location of objects in the scene. From there, they can switch to drawing, tweaking the scene with rough sketches using labels like “sky,” “tree,” “rock,” and “river” and allowing the tool’s paintbrush to incorporate the doodles into images.

AI-driven brainstorming

GauGAN2 isn’t unlike OpenAI’s DALL-E, which can similarly generate images to match a text prompt. Systems like GauGAN2 and DALL-E are essentially visual idea generators, with potential applications in film, software, video games, product, fashion, and interior design.

Nvidia claims that the first version of GauGAN has already been used to create concept art for films and video games. As it did with the original, Nvidia plans to make the code for GauGAN2 available on GitHub, alongside an interactive demo on Playground, the web hub for Nvidia’s AI and deep learning research.

One shortcoming of generative models like GauGAN2 is the potential for bias. In the case of DALL-E, OpenAI used a special model — CLIP — to improve image quality by surfacing the top samples among the hundreds per prompt generated by DALL-E. But a study found that CLIP misclassified photos of Black individuals at a higher rate and associated women with stereotypical occupations like “nanny” and “housekeeper.”

GauGAN2

In its press materials, Nvidia declined to say how — or whether — it audited GauGAN2 for bias. “The model has over 100 million parameters and took under a month to train, with training images from a proprietary dataset of landscape images. This particular model is solely focused on landscapes, and we audited to ensure no people were in the training images … GauGAN2 is just a research demo,” an Nvidia spokesperson explained via email.

GauGAN is one of the newest reality-bending AI tools from Nvidia, creator of deepfake tech like StyleGAN, which can generate lifelike images of people who never existed. In September 2018, researchers at the company described in an academic paper a system that can craft synthetic scans of brain cancer. That same year, Nvidia detailed a generative model that’s capable of creating virtual environments using real-world videos.

GauGAN’s initial debut preceded GAN Paint Studio, a publicly available AI tool that lets users upload any photograph and edit the appearance of depicted buildings, flora, and fixtures. Elsewhere, generative machine learning models have been used to produce realistic videos by watching YouTube clips, creating images and storyboards from natural language captions, and animating and syncing facial movements with audio clips containing human speech.


Categories
Computing

The Walmart Tech Gift Guide You Didn’t Know You Needed in Your Life


If you’re looking to buy great tech this holiday season for your loved ones, Walmart has a bunch of fantastic offers right now. We’ve picked out some of the highlights including a fantastic coffee maker from Keurig, a robot vacuum that will save you plenty of time in your daily routine, as well as some of the best headphones out there, and much more. Whatever kind of tech your loved one is crazy about, Walmart has you covered. We’re here to tell you all about what’s available.

Keurig K-Duo Essentials Coffee Maker — $79, was $99

This Keurig K-Duo Essentials Coffee Maker is a great way of making every morning better. It offers the best of both worlds, letting you use either K-Cup pods or ground coffee to make a delicious cup of joe. It’s ideal for a loved one who is crazy about getting the perfect cup of coffee every time. That’s made easy here with plenty of choice for brew size, along with features like the ability to pause mid-brew and to brew an 8-, 10-, or 12-ounce cup. It’s energy-efficient too, with an auto-off feature that turns the brewer off one minute after the last single-cup brew and the heating plate off two hours after the last carafe brew. Simple yet effective, it’s a dream come true for coffee fans.

Anker Eufy RoboVac G30 Verge — $149, was $350

Anker Eufy RoboVac G30 Verge on a white background.

One of the best robot vacuums out there, this Anker Eufy RoboVac G30 Verge is going to save you or your loved one so much time. It’s a smart robot vacuum with plenty of power thanks to a 2,000Pa suction engine which means it can cope with pretty much any spill that could occur at home. With Wi-Fi, you can use the Eufy app to keep it out of areas you don’t want it to go as well as check its cleaning history over time. Boundary strips further help here so you won’t have to worry about your kids’ toys being disrupted, for instance. Everything about it is super convenient.

HP 27m 27-inch Monitor — $175, was $199

Hewlett Packard Hp 27m 27-inch Monitor on a white background.

If you want to treat a computer addict this Christmas, buy them this HP 27m 27-inch monitor. It offers a lot of what you would see from the best monitors and is sure to prove useful for far longer than just the holiday season. It’s a full HD monitor that combines fantastic visuals with other high-quality features. It has virtually no bezel, so it looks smart on your desk while delivering clear and vivid images every time. Whether you’re looking for a monitor for gaming or office work, it’s a great choice. A 5ms response time and a Low Blue Light mode prove extra beneficial, with the latter protecting your eyes over extended periods of use.

Bose QuietComfort 35 II — $199, was $299

Bose QuietComfort 35 II on a white background.

The Bose QuietComfort 35 II continue to be fantastic headphones for anyone that loves to hear their music crystal clear and at high quality. Thanks to superior noise cancellation features, they’re a fantastic choice if the person you’re buying for can’t bear noise while they try to work or simply relax. Long battery life of up to 20 hours means they won’t have to worry about recharging too often either. Other features include volume-optimized EQ for balanced audio performance plus Google Assistant support for hands-free use.

Hisense 58-inch Class 4K TV — $380, was $425

Hisense 58-inch 4K TV on a white background.

From one of the best TV brands, Hisense, you can buy a huge 58-inch 4K TV. It’s fantastic to use thanks to its 4K resolution but it offers so much more than that. There’s Dolby Vision support along with a smart game mode that means input lag is significantly improved while you play. Alongside that is Motion Rate image processing technology so it can keep up with fast-moving action. If the person you’re buying for loves action movies, sports, or gaming, this is a particularly great purchase to make.

We strive to help our readers find the best deals on quality products and services, and we choose what we cover carefully and independently. The prices, details, and availability of the products and deals in this post may be subject to change at anytime. Be sure to check that they are still in effect before making a purchase.

Digital Trends may earn commission on products purchased through our links, which supports the work we do for our readers.


Categories
AI

Deep tech, no-code tools will help future artists make better visual content

This article was contributed by Abigail Hunter-Syed, Partner at LDV Capital.

Despite the hype, the “creator economy” is not new. It has existed for generations, primarily dealing with physical goods (pottery, jewelry, paintings, books, photos, videos, etc). Over the past two decades, it has become predominantly digital. The digitization of creation has sparked a massive shift in content creation where everyone and their mother are now creating, sharing, and participating online.

The vast majority of the content created and consumed on the internet is visual content. In our recent Insights report at LDV Capital, we found that by 2027, there will be at least 100 times more visual content in the world. The future creator economy will be powered by visual tech tools that automate aspects of content creation and remove the need for technical skill in digital creation. This article discusses the findings from that report.

Group of superheroes on a dark background

Image Credit: ©LDV CAPITAL INSIGHTS 2021

We now live as much online as we do in person and as such, we are participating in and generating more content than ever before. Whether it is text, photos, videos, stories, movies, livestreams, video games, or anything else that is viewed on our screens, it is visual content.

Currently, it takes time, often years, of prior training to produce a single piece of quality and contextually-relevant visual content. Typically, it has also required deep technical expertise in order to produce content at the speed and quantities required today. But new platforms and tools powered by visual technologies are changing the paradigm.

Computer vision will aid livestreaming

Livestreaming is video that is broadcast in real time over the internet, and it is one of the fastest-growing segments in online video, projected to be a $150 billion industry by 2027. Over 60% of individuals aged 18 to 34 watch livestreaming content daily, making it one of the most popular forms of online content.

Gaming is the most prominent livestreaming content today but shopping, cooking, and events are growing quickly and will continue on that trajectory.

The most successful streamers today spend 50 to 60 hours a week livestreaming, and many more hours on production. Visual tech tools that leverage computer vision, sentiment analysis, overlay technology, and more will aid livestream automation. They will enable streamers’ feeds to be analyzed in real time so production elements can be added automatically, improving quality and cutting back the time and technical skill required of streamers today.

Synthetic visual content will be ubiquitous

A lot of the visual content we view today is already computer-generated imagery (CGI), special effects (VFX), or altered by software (e.g., Photoshop). Whether it’s the army of the dead in Game of Thrones or a resized image of Kim Kardashian in a magazine, we see content everywhere that has been digitally designed and altered by human artists. Now, computers and artificial intelligence can generate images and videos of people, things, and places that never physically existed.

By 2027, we will view more photorealistic synthetic images and videos than ones that document a real person or place. Some experts in our report even project synthetic visual content will be nearly 95% of the content we view. Synthetic media uses generative adversarial networks (GANs) to write text, make photos, create game scenarios, and more using simple prompts from humans such as “write me 100 words about a penguin on top of a volcano.” GANs are the next Photoshop.

L: Remedial drawing created, R: Landscape Image built by NVIDIA’s GauGAN from the drawing

Image Credit: ©LDV CAPITAL INSIGHTS 2021

In some circumstances, it will be faster, cheaper, and more inclusive to synthesize objects and people than to hire models, find locations and do a full photo or video shoot. Moreover, it will enable video to be programmable – as simple as making a slide deck.

Synthetic media that leverages GANs can also personalize content nearly instantly and, therefore, enable any video to speak directly to the viewer using their name, or write a video game in real time as a person plays. The gaming, marketing, and advertising industries are already experimenting with the first commercial applications of GANs and synthetic media.

Artificial intelligence will deliver motion capture to the masses

Animated video requires expertise as well as even more time and budget than content starring physical people. Animated video typically refers to 2D and 3D cartoons, motion graphics, computer-generated imagery (CGI), and visual effects (VFX). They will be an increasingly essential part of the content strategy for brands and businesses deployed across image, video and livestream channels as a mechanism for diversifying content.

Graph displaying motion capture landscape

Image Credit: ©LDV CAPITAL INSIGHTS 2021

The greatest hurdle to generating animated content today is the skill – and the resulting time and budget – needed to create it. A traditional animator typically creates about four seconds of content per workday. Motion capture (MoCap) is a tool often used by professional animators in film, TV, and gaming to digitally record the pattern of an individual’s movements for the purpose of animating them. An example would be recording Steph Curry’s jump shot for NBA 2K.

Advances in photogrammetry, deep learning, and artificial intelligence (AI) are enabling camera-based MoCap – with little to no suits, sensors, or hardware. Facial motion capture has already come a long way, as evidenced in some of the incredible photo and video filters out there. As capabilities advance to full body capture, it will make MoCap easier, faster, budget-friendly, and more widely accessible for animated visual content creation for video production, virtual character live streaming, gaming, and more.
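
As a small illustration of how accessible camera-based capture has become, the sketch below uses the open-source MediaPipe and OpenCV libraries (assuming both are installed) to pull body landmarks from an ordinary webcam. It is a toy example, not a production animation pipeline; a real pipeline would retarget those landmarks onto a character rig.

```python
# Toy example of camera-based motion capture: extract body landmarks from a
# webcam feed with MediaPipe Pose (assumes `mediapipe` and `opencv-python`
# are installed). Here we simply print one joint position per frame.
import cv2
import mediapipe as mp

pose = mp.solutions.pose.Pose()
cap = cv2.VideoCapture(0)                      # default webcam

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB; OpenCV delivers BGR.
    results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.pose_landmarks:
        lm = results.pose_landmarks.landmark
        wrist = lm[mp.solutions.pose.PoseLandmark.RIGHT_WRIST]
        print(f"right wrist: x={wrist.x:.2f} y={wrist.y:.2f}")
    cv2.imshow("camera mocap", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):      # press q to stop
        break

cap.release()
cv2.destroyAllWindows()
```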

Nearly all content will be gamified

Gaming is a massive industry set to hit nearly $236 billion globally by 2027. It will keep expanding as more and more content introduces gamification to encourage interactivity. Gamification is applying typical elements of game playing, such as point scoring, interactivity, and competition, to encourage engagement.

Games with non-gamelike objectives and more diverse storylines are enabling gaming to appeal to wider audiences. With a growth in the number of players, diversity and hours spent playing online games will drive high demand for unique content.

AI and cloud infrastructure capabilities play a major role in aiding game developers to build tons of new content. GANs will gamify and personalize content, engaging more players and expanding interactions and community. Games as a Service (GaaS) will become a major business model for gaming. Game platforms are leading the growth of immersive online interactive spaces.

People will interact with many digital beings

We will have digital identities to produce, consume, and interact with content. In our physical lives, people have many facets to their personality and represent themselves differently in different circumstances: the boardroom vs. the bar, in groups vs. alone, and so on. Online, old-school AOL screen names have already evolved into profile photos, memojis, avatars, gamertags, and more. Over the next five years, the average person will have at least three digital versions of themselves, both photorealistic and fantastical, to participate online.

Five examples of digital identities

Image Credit: ©LDV CAPITAL INSIGHTS 2021

Digital identities (or avatars) require visual tech. Some will enable public anonymity of the individual, some will be pseudonyms and others will be directly tied to physical identity. A growing number of them will be powered by AI.

These autonomous virtual beings will have personalities, feelings, problem-solving capabilities, and more. Some of them will be programmed to look, sound, act and move like an actual physical person. They will be our assistants, co-workers, doctors, dates and so much more.

Interacting with both people-driven avatars and autonomous virtual beings in virtual worlds and with gamified content sets the stage for the rise of the Metaverse. The Metaverse could not exist without visual tech and visual content and I will elaborate on that in a future article.

Machine learning will curate, authenticate, and moderate content

For creators to continuously produce the volumes of content necessary to compete in the digital world, a variety of tools will be developed to automate the repackaging of content from long-form to short-form, from videos to blogs, or vice versa, social posts, and more. These systems will self-select content and format based on the performance of past publications using automated analytics from computer vision, image recognition, sentiment analysis, and machine learning. They will also inform the next generation of content to be created.

In order to then filter through the massive amount of content most effectively, autonomous curation bots powered by smart algorithms will sift through and present to us content personalized to our interests and aspirations. Eventually, we’ll see personalized synthetic video content replacing text-heavy newsletters, media, and emails.

Additionally, the plethora of new content, including visual content, will require ways to authenticate it and attribute it to the creator both for rights management and management of deep fakes, fake news, and more. By 2027, most consumer phones will be able to authenticate content via applications.

It is also deeply important to detect disturbing and dangerous content, and that is increasingly hard to do given the vast quantities of content published. AI and computer vision algorithms are necessary to automate this process by detecting hate speech, graphic pornography, and violent attacks, because doing it manually in real time is too difficult and not cost-effective. Multi-modal moderation that includes image recognition, as well as voice recognition, text recognition, and more, will be required.

Visual content tools are the greatest opportunity in the creator economy

The next five years will see individual creators who leverage visual tech tools to create visual content rival professional production teams in the quality and quantity of the content they produce. The greatest business opportunities today in the Creator Economy are the visual tech platforms and tools that will enable those creators to focus on the content and not on the technical creation.

Abigail Hunter-Syed is a Partner at LDV Capital investing in people building businesses powered by visual technology. She thrives on collaborating with deep, technical teams that leverage computer vision, machine learning, and AI to analyze visual data. She has more than a ten-year track record of leading strategy, ops, and investments in companies across four continents and rarely says no to soft-serve ice cream.


Categories
AI

AI tech drives transformation of F1 racing

This post was written by Will Owen, BD Associate at Valkyrie.

A short history of data in F1

Until the 1980s, cars were all mechanical. Computers were too large and slow to be useful on race cars, so the driver was the only source of “data” for the racing engineer. As amazing as drivers are at “feeling” the car, it’s tough for any driver to recall objective measurements about how the car performed in a session when they are busy focusing on driving.

Once electronics became small enough, they started becoming critical to operating car systems, such as fuel management and engine timing. As more and more sensors were attached to various systems of the car, more data was gathered on both car performance and reliability. At first, cars stored only a small amount of sensor data in the memory built into the car’s computer. Engineers could access it after the race but not during it. As technology progressed, cars gained the ability to send small amounts of data back to the pitlane while they were on track, paving the way for a new era in motorsports.

By the early 1990s, cars became completely dependent on computer processing to generate lap time and performance. Active suspension, traction control, power-assisted controls, and many more systems all required some kind of digital processing. Many of these driver aid systems were banned not long after they were implemented for various sporting and budgetary reasons. As technology continued to progress, teams were able to mount more and more types of sensors on the car in order to get a complete digital picture of how the car was performing on the track.

Where we are now

Nowadays, the use of data is prolific in all areas of decision-making in Formula 1 racing, except for driver aids built into the car. From car development to race strategy, the telemetry that is broadcast from the race car is invaluable to finding performance and achieving results. To make use of the massive amounts of data that the car generates when it’s running on the track, F1 teams set up their own portable IT infrastructure that supports their engineers’ computing needs during the race event.

In addition to personnel at the track, Formula 1 teams’ home bases house permanent engineering and data centers, where dozens of engineers work tirelessly on live telemetry coming from the race car. Every bit of data gathered when the car is on the racetrack is critical for giving the engineers who built the car feedback on their work. Time is of the essence on a race weekend, since decisions have to be made quickly on what parts to use. Modern innovations like cloud computing and data science enable humans to make those critical decisions from larger amounts of data.

Car design is a highly technical frontier that requires the world’s best computer scientists, racing engineers, and physicists all working together to produce the best performing and most elite race car possible within the rules. Race teams now program custom software to assist with designing the car. The process of developing a race car looks much different today than it did in the past, and now depends on computer-aided design (CAD) to identify improvements with maximum precision. In particular, teams use computational fluid dynamics (CFD) to simulate their cars’ aerodynamics with different configurations and parts. All of these techniques require robust data systems that can handle the computing power needed for design.

Racing to push the boundaries

Formula 1 will continue to push boundaries for all motorsports when it comes to using data to improve performance. All racing teams, but especially those in Formula 1, have to constantly innovate their methods to keep up with competitors. As budget restraints are increasingly imposed on teams to make the sport more equitable, Formula 1 teams will need to rely more heavily on simulations to test their new cars and subcomponents. Simulations are built on models of race cars that allow engineers to “drive” the car in the computer based on certain parameters, resulting in data generated in the same format as the real race car. Making effective simulations depends on having accurate models of how the car performs in the real world, and how external factors affect car performance. Teams will have to pioneer new methods to simulate cars with greater accuracy, and these will undoubtedly involve both the powers of artificial intelligence (AI) and machine learning (ML) to access a level of detail beyond human engineers.

As teams have a finite budget, it is simply not possible to hire enough engineers to comprehensively review all the sensor data that come off of the race cars. Current artificial intelligence capabilities help process vast quantities of data and highlight areas for human engineers to look for performance gains. The next generation of AI techniques integrated into racing will play a prominent role in car setup and design decisions that yield the best results on track.

The complexity of the racing environment will be a true test for the collaboration between human engineers and artificial intelligence. Getting the right insights for car performance from data requires more than just the processing of sensors. Data-driven racing requires a deep understanding of how the racing environment works and what tradeoffs are acceptable for all other elements of racing besides just pure performance. AI systems informing engineers will have to become more “aware” of the context where cars are performing. Otherwise, they will always be reliant on the brains of racing engineers.

This story originally appeared on valkyrie.ai. Copyright 2021.
