Nvidia’s RTX 4000 GPUs get new specs, and it’s not all good news

Nvidia’s upcoming Ada Lovelace graphics cards just received a new set of rumored specifications, and this time around, it’s a bit of a mixed bag.

While the news is good for one of the GPUs, the RTX 4070 actually received a cut to its specs, and the leaker says this won’t translate to a cheaper price.

And TBP, 450/420?/300W.

— kopite7kimi (@kopite7kimi) June 23, 2022

The information comes from kopite7kimi, a well-recognized name when it comes to PC hardware leaks, who has just revealed an update to the specifications of the RTX 4090, RTX 4080, and the RTX 4070. While we’ve already heard previous whispers about the specs of the RTX 4090 and the RTX 4070, this is the first time we’re getting predictions about the specs of the RTX 4080.

Let’s start with the good news. If this rumor is true, the flagship RTX 4090 seems to have received a slight bump in the core count. The previously reported number was 16,128 CUDA cores, and this has now gone up to 16,384 cores, which translates to an upgrade from 126 streaming multiprocessors (SMs) to 128. As for the rest of the specs, they remain unchanged — the current expectation is that the GPU will get 24GB of GDDR6X memory across a 384-bit memory bus, running at a 21Gbps memory speed.
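For anyone who wants to check the math, peak memory bandwidth falls out of the bus width and per-pin speed, and the SM count falls out of the core count, since Ada (like Ampere before it) packs 128 CUDA cores per SM. A quick sketch using only the leaked figures:

```python
# Peak bandwidth (GB/s) = bus width in bits x per-pin rate in Gbps / 8 bits per byte.
def peak_bandwidth_gbs(bus_width_bits: int, rate_gbps: float) -> float:
    return bus_width_bits * rate_gbps / 8

print(peak_bandwidth_gbs(384, 21))  # 1008.0 GB/s for the rumored RTX 4090

# 128 CUDA cores per SM ties the leaked core and SM counts together.
print(16_384 / 128)  # 128 SMs, up from 16,128 / 128 = 126
```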

The RTX 4090 is based on the AD102 GPU, which maxes out at 144 SMs, but it seems unlikely that the RTX 4090 itself will ever reach such heights. The full version of the AD102 GPU is probably going to be found in an even better graphics card, be it a Titan or simply an RTX 4090 Ti, and that card is also rumored to have monstrous power requirements. This time around, kopite7kimi didn’t reveal anything new about it, and as of now, we still don’t know for a fact that it even exists.

Moving on to the RTX 4080 with the AD103 GPU, it’s said to come with 10,240 CUDA cores and 16GB of memory. However, according to kopite7kimi, it would rely on GDDR6 memory as opposed to GDDR6X. The leaker predicts 18Gbps modules, which would actually make the memory slower than the RTX 3080’s 19Gbps GDDR6X. The core count is exactly the same as in the RTX 3080 Ti. So far, this GPU doesn’t sound very impressive, but it’s said to come with a much larger L2 cache that could potentially offer an upgrade in gaming performance over its predecessors.


When it comes to the RTX 4070, the GPU was previously rumored to come with 12GB of memory, but now, kopite7kimi predicts just 10GB across a 160-bit memory bus. It’s said to offer 7,168 CUDA cores. While that’s certainly an upgrade over the RTX 3070, it might not be the generational leap some users are hoping for. A price cut to match the reduced specs supposedly isn’t coming either, but we still don’t know the MSRP of this GPU, so it’s hard to judge its value.

Lastly, the leaker delivered an update on the power requirements of the GPUs, which have certainly been the subject of much speculation over the last few months. The predicted TBP (total board power) is 450 watts for the RTX 4090, 420 watts for the RTX 4080, and 300 watts for the RTX 4070. Those numbers are a lot more conservative than the 600 watts (and above) that we’ve seen floating around.

What does all of this mean for us — the end users of the upcoming RTX 40-series GPUs? Not too much just yet. The specifications may still change, and although kopite7kimi has a proven track record, they could be wrong about the specs, too. As things stand now, however, only the RTX 4090 seems to mark a huge upgrade over its predecessor, while the other two represent much more modest changes. It remains to be seen whether the pricing will reflect that.


Nvidia’s new liquid-cooled GPUs are heading to data centers

Nvidia is taking some notes from the enthusiast PC building crowd in an effort to reduce the carbon footprint of data centers. The company announced two new liquid-cooled GPUs during its Computex 2022 keynote, but they won’t be making their way into your next gaming PC.

Instead, the H100 (announced at GTC earlier this year) and A100 GPUs will ship as part of HGX server racks toward the end of the year. Liquid cooling isn’t new for the world of supercomputers, but mainstream data center servers haven’t traditionally been able to access this efficient cooling method (not without trying to jerry-rig a gaming GPU into a server, that is).

In addition to HGX server racks, Nvidia will offer the liquid-cooled versions of the H100 and A100 as slot-in PCIe cards. The A100 is coming in the second half of 2022, and the H100 is coming in early 2023. Nvidia says “at least a dozen” system builders will have these GPUs available by the end of the year, including options from Asus, ASRock, and Gigabyte.

Data centers account for around 1% of the world’s total electricity usage, and nearly half of that electricity is spent solely on cooling everything in the data center. As opposed to traditional air cooling, Nvidia says its new liquid-cooled cards can reduce power consumption by around 30% while reducing rack space by 66%.

Instead of an all-in-one system like you’d find on a liquid-cooled gaming GPU, the A100 and H100 use a direct liquid connection to the processing unit itself. Everything but the feed lines is hidden in the GPU enclosure, which itself only takes up one PCIe slot (as opposed to two for the air-cooled versions).

Data centers look at power usage effectiveness (PUE) to gauge energy usage — essentially a ratio between how much power a data center is drawing versus how much power the computing is using. With an air-cooled data center, Equinix had a PUE of about 1.6. Liquid cooling with Nvidia’s new GPUs brought that down to 1.15, which is remarkably close to the 1.0 PUE data centers aim for.
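Put differently, PUE is just total facility power divided by the power that reaches the IT equipment. Here is a minimal sketch, with hypothetical kilowatt figures chosen only to illustrate the ratios cited above:

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_load_kw

# For every 1,000kW of compute, an air-cooled 1.6-PUE site burns 600kW on
# overhead (mostly cooling); the liquid-cooled figure cuts that to 150kW.
print(pue(1600, 1000))  # 1.6  -> Equinix's air-cooled baseline
print(pue(1150, 1000))  # 1.15 -> with the liquid-cooled GPUs
```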

Energy usage for Nvidia liquid-cooled data center GPUs.

In addition to better energy efficiency, Nvidia says liquid cooling provides benefits for preserving water. The company says millions of gallons of water are evaporated in data centers each year to keep air-cooled systems operating. Liquid cooling allows that water to recirculate, turning “a waste into an asset,” according to Zac Smith, head of edge infrastructure at Equinix.

Although these cards won’t show up in the massive data centers run by Google, Microsoft, and Amazon — which are likely using liquid cooling already — that doesn’t mean they won’t have an impact. Banks, medical institutions, and data center providers like Equinix comprise a large portion of the data centers operating today, and they could all benefit from liquid-cooled GPUs.

Nvidia says this is just the start of a journey to carbon-neutral data centers, as well. In a press release, Nvidia senior product marketing manager Joe Delaere wrote that the company plans “to support liquid cooling in our high-performance data center GPUs and our Nvidia HGX platforms for the foreseeable future.”


How to watch Nvidia’s Computex 2022 keynote

Next week, Nvidia will be presenting its Computex 2022 keynote, where the company will discuss current and upcoming products for data centers, professional applications, and gaming. It’s not entirely clear what the company will be talking about, and although rumors range from a new low-end GPU to an announcement of next-gen GPUs, Nvidia is always very secretive, so we’ll just have to wait and see.

Here’s where you can watch Nvidia’s Computex keynote and what you can expect the company to announce.

How to watch Nvidia’s Computex 2022 keynote

Six different Nvidia executives will speak at Nvidia’s keynote, which starts at 8 p.m. PT on May 23. Computex is hosted in Taiwan, where daytime hours fall during America’s late night, so you might have to stay up late to catch the presentation.

Nvidia is likely going to stream the presentation on its YouTube channel, as it typically does for Computex and events like GTC. After the stream is over, we expect a recording to be available on the YouTube page.

Following the presentation, Nvidia will host a talk specifically about Omniverse, hosted by Richard Kerris, Nvidia’s vice president of Omniverse. The talk will cover “the enormous opportunities simulation brings to 3D workflows and the next evolution of A.I.”

What to expect from Nvidia’s Computex 2022 keynote

Nvidia is notoriously tight-lipped about its upcoming products. In fact, ever since the GTX 10-series, Nvidia has announced new gaming GPUs just weeks before launch, unlike rivals AMD and Intel, which tend to announce big products more than a month ahead of launch. So, we’re either on the cusp of the next generation (presumably the RTX 40-series) or still some months away.

Jeff Fisher presenting the RTX 3090 Ti.

One hint comes from the list of speakers. When it comes to gaming news, we’re really interested in Jeff Fisher, senior vice president of GeForce. He previously announced the RTX 3090 Ti at CES 2022, leading some to claim he’s back to announce the RTX 40-series. But it’s hard to imagine Nvidia CEO Jensen Huang not announcing the launch of a new generation of gaming GPUs in his famous kitchen. If Fisher is announcing a new gaming GPU, it’s more likely to be the rumored GTX 1630.

There are five other speakers at Nvidia’s keynote, but they’re expected to talk about data centers, professional GPUs, and automotive, not gaming GPUs. Unfortunately, if you’re not really into enterprise-grade hardware, you probably aren’t the target demographic of this keynote. Still, Nvidia does what Nvidia wants and we can never be too sure what it’s going to show at a big event like Computex.


Nvidia’s GeForce RTX 3090 Ti GPU Set for January Launch

Nvidia is reportedly set to add three new variations to its GeForce RTX 30 series of GPUs, with the flagship RTX 3090 Ti apparently due for a release next month.

According to an embargoed document uncovered by VideoCardz, the highly anticipated RTX 3090 Ti GPU will be released on January 27, 2022. Also expected to be released on that same date is the GeForce RTX 3050 8GB graphics card. With CES 2022 around the corner, expect Nvidia to formally introduce these video cards at the event.

Elsewhere, Nvidia is said to be planning to announce the upgraded RTX 3070 Ti 16GB model next week on December 17, while a consumer launch is scheduled for January 11. As for its specifications, VideoCardz notes that the GPU will have the same CUDA core count and clock speeds as the 8GB model.

The card will also come with 16GB of GDDR6X memory, according to Wccftech, which means the standard GDDR6 modules found on the current GeForce RTX 3070 graphics card are being upgraded.

Nvidia’s GeForce RTX 3050 8GB, meanwhile, is rumored to deliver 3072 CUDA cores in 24 SM units through the GA106-150 GPU, joined by 8GB of GDDR6 memory. Ultimately, such specs would make the card an attractive option in the mainstream segment of the market.

As for the powerful RTX 3090 Ti, which is obviously geared toward enthusiasts, previous rumors have given us an insight into what to expect from the card. It’s expected to feature 21Gbps GDDR6X memory based on 2GB GDDR6X modules. Notably, this should allow the GPU to sport roughly 1TBps of bandwidth.
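That 1TBps figure checks out if the card keeps the GA102’s 384-bit memory bus (an assumption based on the full die, not something the leak itself states):

```python
# 384-bit bus x 21Gbps per pin / 8 bits per byte = just over 1TBps.
print(384 * 21 / 8)  # 1008.0 GB/s
```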

Next-generation standards will also be supported, with a new 16-pin PCIe Gen 5.0 power connector, while the TDP rises to 450W; the RTX 3090 Ti is set to be Nvidia’s first video card for the consumer market to utilize the full GA102 GPU via its 10,752 CUDA cores.

Nvidia’s keynote at CES 2022 takes place on January 4, aptly providing it with an opportunity to unveil the aforementioned Ampere graphics cards. Getting your hands on these upcoming GPUs, however, is another discussion entirely due to the current worldwide shortage. Nvidia recently stated that long-term agreements with manufacturers meant supplies could improve during the second half of 2022.


Nvidia’s latest AI tech translates text into landscape images

Nvidia today detailed an AI system called GauGAN2, the successor to its GauGAN model, that lets users create lifelike landscape images that don’t exist. Combining techniques like segmentation mapping, inpainting, and text-to-image generation in a single tool, GauGAN2 is designed to create photorealistic art with a mix of words and drawings.

“Compared to state-of-the-art models specifically for text-to-image or segmentation map-to-image applications, the neural network behind GauGAN2 produces a greater variety and higher-quality of images,” Isha Salian, a member of Nvidia’s corporate communications team, wrote in a blog post. “Rather than needing to draw out every element of an imagined scene, users can enter a brief phrase to quickly generate the key features and theme of an image, such as a snow-capped mountain range. This starting point can then be customized with sketches to make a specific mountain taller or add a couple of trees in the foreground, or clouds in the sky.”

Generated images from text

GauGAN2, whose namesake is post-Impressionist painter Paul Gauguin, improves upon Nvidia’s GauGAN system from 2019, which was trained on more than a million public Flickr images. Like GauGAN, GauGAN2 has an understanding of the relationships among objects like snow, trees, water, flowers, bushes, hills, and mountains, such as the fact that the type of precipitation changes depending on the season.

GauGAN and GauGAN2 are a type of system known as a generative adversarial network (GAN), which consists of a generator and a discriminator. The generator produces samples, such as landscape images conditioned on text, and the discriminator assesses whether those samples look realistic. The generator is trained by trying to fool the discriminator, so while the GAN’s outputs are initially poor in quality, they improve with the discriminator’s feedback.
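GauGAN2 itself is a large conditional model, but the adversarial loop described above fits in a few lines of PyTorch. The sketch below is purely illustrative: the network sizes, learning rates, and flattened 64x64 images are toy choices for this article, not Nvidia’s architecture.

```python
import torch
import torch.nn as nn

# Toy generator and discriminator over flattened 64x64 RGB images.
G = nn.Sequential(nn.Linear(100, 256), nn.ReLU(),
                  nn.Linear(256, 64 * 64 * 3), nn.Tanh())
D = nn.Sequential(nn.Linear(64 * 64 * 3, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real: torch.Tensor) -> None:
    batch = real.size(0)
    fake = G(torch.randn(batch, 100))

    # Discriminator learns to call real images 1 and generated images 0.
    d_loss = (bce(D(real), torch.ones(batch, 1)) +
              bce(D(fake.detach()), torch.zeros(batch, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator improves by trying to make the discriminator call its fakes real.
    g_loss = bce(D(fake), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

In GauGAN2 the generator is additionally conditioned on text embeddings and segmentation maps rather than noise alone, but the feedback loop between the two networks is the same.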

Unlike GauGAN, GauGAN2 — which was trained on 10 million images — can translate natural language descriptions into landscape images. Typing a phrase like “sunset at a beach” generates the scene, while adding adjectives like “sunset at a rocky beach” or swapping “sunset” to “afternoon” or “rainy day” instantly modifies the picture.


With GauGAN2, users can generate a segmentation map — a high-level outline that shows the location of objects in the scene. From there, they can switch to drawing, tweaking the scene with rough sketches using labels like “sky,” “tree,” “rock,” and “river” and allowing the tool’s paintbrush to incorporate the doodles into images.

AI-driven brainstorming

GauGAN2 isn’t unlike OpenAI’s DALL-E, which can similarly generate images to match a text prompt. Systems like GauGAN2 and DALL-E are essentially visual idea generators, with potential applications in film, software, video games, product, fashion, and interior design.

Nvidia claims that the first version of GauGAN has already been used to create concept art for films and video games. As it did with the original, Nvidia plans to make the code for GauGAN2 available on GitHub, alongside an interactive demo on Playground, the web hub for Nvidia’s AI and deep learning research.

One shortcoming of generative models like GauGAN2 is the potential for bias. In the case of DALL-E, OpenAI used a special model — CLIP — to improve image quality by surfacing the top samples among the hundreds per prompt generated by DALL-E. But a study found that CLIP misclassified photos of Black individuals at a higher rate and associated women with stereotypical occupations like “nanny” and “housekeeper.”


In its press materials, Nvidia declined to say how — or whether — it audited GauGAN2 for bias. “The model has over 100 million parameters and took under a month to train, with training images from a proprietary dataset of landscape images. This particular model is solely focused on landscapes, and we audited to ensure no people were in the training images … GauGAN2 is just a research demo,” an Nvidia spokesperson explained via email.

GauGAN is one of the newest reality-bending AI tools from Nvidia, creator of deepfake tech like StyleGAN, which can generate lifelike images of people who never existed. In September 2018, researchers at the company described in an academic paper a system that can craft synthetic scans of brain cancer. That same year, Nvidia detailed a generative model that’s capable of creating virtual environments using real-world videos.

GauGAN’s initial debut preceded GAN Paint Studio, a publicly available AI tool that lets users upload any photograph and edit the appearance of depicted buildings, flora, and fixtures. Elsewhere, generative machine learning models have been used to produce realistic videos by watching YouTube clips, creating images and storyboards from natural language captions, and animating and syncing facial movements with audio clips containing human speech.


The latest version of NVIDIA’s DLSS technology is better at rendering moving objects

NVIDIA has released a major update for its DLSS technology. With version 2.3 of the software, the company says the AI algorithm makes smarter use of motion vectors to improve how objects look when they’re moving. The update also helps to reduce ghosting, make particle effects look clearer, and improve temporal stability. The latter has traditionally been one of the weakest aspects of the technology, so DLSS 2.3 represents a major improvement. As of today, 16 games feature support for DLSS 2.3. Highlights include Cyberpunk 2077, Deathloop, and Doom Eternal.

If you don’t own an RTX card but still want to take advantage of the performance boost you can get from upscaling a game, NVIDIA has updated its Image Scaling technology to improve both fidelity and performance. Accessible through the NVIDIA Control Panel, the tool uses spatial upscaling to do the job. That means the result isn’t as clean as the temporal method DLSS uses, but the advantage is that you don’t need special hardware. To that end, NVIDIA is releasing an SDK that will allow any GPU, regardless of make, to take advantage of the technology. In that way, NVIDIA says, game developers can offer the best of both worlds: DLSS for the best possible image quality and NVIDIA Image Scaling for cross-platform support.
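For intuition about the spatial-versus-temporal distinction: a spatial upscaler works on one frame at a time, which is roughly what this Pillow sketch does. The file names are hypothetical, and NVIDIA’s actual filter is a smarter directional-scaling-and-sharpening pass, not a plain bilinear resize.

```python
from PIL import Image, ImageFilter

# Spatial upscaling in its crudest form: resize a single frame, then sharpen.
# No information from neighboring frames is used, which is why results trail
# DLSS's temporal approach but run on any GPU with no special hardware.
frame = Image.open("frame_1080p.png")  # hypothetical captured game frame
upscaled = frame.resize((2560, 1440), Image.BILINEAR)
sharpened = upscaled.filter(ImageFilter.UnsharpMask(radius=2, percent=120))
sharpened.save("frame_1440p.png")
```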


BMW uses Nvidia’s Omniverse to build state-of-the-art factories

BMW has standardized on a new technology unveiled by Nvidia, the Omniverse, to simulate every aspect of its manufacturing operations, in an effort to push the envelope on smart manufacturing.

BMW has done this down to the level of work order instructions for factory workers across the 31 factories in its production network, reducing production planning time by 30%, the company said.

During Nvidia’s GTC November 2021 conference, members of BMW’s Digital Solutions for Production Planning and Data Management for Virtual Factories teams provided an update on how far BMW and Nvidia have progressed in simulating manufacturing operations with digital twins. Their presentation, BMW and Omniverse in Production, provides a detailed tour of how the Regensburg factory has a fully functioning, real-time digital twin capable of simulating production at scale and constraint-based finite scheduling, down to work order instructions and robotics programming on the shop floor.

Improving product quality, reducing manufacturing costs and unplanned downtime while increasing output, and ensuring worker safety are goals all manufacturers strive for, yet seldom reach consistently. Achieving these goals has much to do with how fluidly, and how close to real time, data from production and process monitoring, product definition, and shop floor scheduling is shared across manufacturing in a format each team can understand.

Overcoming the challenges of achieving these goals motivates manufacturers to adopt analytics, AI, and digital twin technologies. At the heart of these challenges is the need to accurately decipher the massive amount of data manufacturing operations generate daily. Getting the most value out of data that any given manufacturing operation generates daily is the essence of smart manufacturing.

Defining What A Factory of the Future Is

McKinsey and the World Economic Forum (WEF) are studying what sets exceptional factories apart from all the others. Their initial collaborative research and many subsequent research studies, including the creation of the Shaping the Future of Advanced Manufacturing and Production Platform, reflect how productive the collaborative efforts of McKinsey and the WEF are today. In addition, McKinsey and WEF have set high standards in their definition of what a factory of the future is, as they’re providing ongoing analysis of the select group of manufacturers’ operations for clients.

According to McKinsey and WEF, lighthouse manufacturers turn pilots into integrated production at scale. They’re also known for their scalable technology platforms, strong performance on change management, and adaptability to changing supply chain, market, and customer constraints, while maintaining visibility and cost control across the manufacturing process. BMW Automotive is an inaugural member of the lighthouse manufacturing companies McKinsey and WEF first identified after evaluating over 1,000 companies. The following graphic from McKinsey and WEF’s research provides a geographical view of lighthouse manufacturers’ factory locations globally.


Above: McKinsey and WEF’s ongoing collaboration provides new insights into how manufacturers can continue to adopt new technologies to improve operations, add greater visibility and control across shop floors, and keep costs in check. Source: McKinsey and Company, ‘Lighthouse’ manufacturers lead the way—can the rest of the world keep up?

BMW’s Factories of the Future Blueprint

The four sessions BMW contributed to during Nvidia’s GTC November 2021 Conference together provide a blueprint of how BMW transforms its production centers into factories of the future. Core to their blueprint is getting back-end integration services right, including real-time integration with ProjectWise, BMW internal systems Prisma and MAPP, and Tecnomatix eMS. BMW relies on Omniverse Connectors that support live sync with each application on the front end of their tech stacks. Front-end applications include many leading 2D and 3D computer-aided design (CAD), real-time visualization, product lifecycle management (PLM), and advanced imaging tools. BMW standardized on Nvidia Omniverse as the centralized platform to integrate the various back-end and front-end systems at scale so their tech stack could scale and support analytics, AI, and digital twin simulations across 31 manufacturing plants.

Excel at customizing models in real-time

How BMW deployed Nvidia Omniverse explains why it’s succeeding with its factory of the future initiatives while others fail. BMW recognized early that the different clock speeds, or cadences, of each system integral to production, from CAD and PLM to ERP, MES, quality management, and CRM, needed to be synchronized around a single source of data everyone could understand. Nvidia Omniverse acts as the data orchestrator and provides information every department can interpret and act on. “Global teams can collaborate using different software packages to design and plan the factory in real-time, using the capability to operate in a perfect simulation, which revolutionizes BMW’s planning processes,” says Milan Nedeljković, member of the Board of Management of BMW AG.

Product customizations dominate BMW’s product sales and production. The company currently produces 2.5 million vehicles per year, and 99% of them are custom. BMW says that each production line can be quickly configured to produce any one of ten different models, each with 100 or more options, giving customers up to 2,100 ways to configure a BMW. In addition, Nvidia Omniverse gives BMW the flexibility to reconfigure its factories quickly to accommodate big new model launches.

Simulating line improvements to save time

BMW succeeds with its product customization strategy because each system essential to production is synchronized on the Nvidia Omniverse platform. As a result, every step in customizing a given model reflects customer requirements and can also be shared in real time with each production team. In addition, BMW says real-time production monitoring data is used for benchmarking digital twin performance. With digital twins of an entire factory, BMW engineers can quickly identify where and how each specific model’s production sequence can be improved. An example is how BMW uses digital humans and simulation to test new workflows for worker ergonomics and efficiency, training digital humans with data from real associates. They’re also doing the same with the robotics they have in place across plant floors today. Combining real-time production and process monitoring data with simulated results helps BMW’s engineers quickly identify areas for improvement, so quality, cost, and production efficiency goals keep getting achieved.


Above: BMW simulates robotics improvements using Nvidia’s Omniverse first before introducing them into production runs to ensure greater accuracy, product quality, and cost goals are going to be met.

For any manufacturer to succeed with a complex product customization strategy like BMW’s, all the systems that manufacturing relies on must be in sync with each other in real time. There needs to be a common cadence the systems operate at, providing real-time data and information each team can use to do their specific jobs. BMW is achieving this today, enabling them to plan down to the model-by-model configuration level at scale. They’re also able to test each model configuration in a fully functioning digital twin environment in Nvidia’s Omniverse, and then reconfigure production lines to produce the new models. Real-time production and process monitoring data from existing production lines and digital twins helps BMW’s engineering and production planning teams know where, how, and why to modify digital twins to completely test any new improvement before making it live in production.


Seeing into our future with Nvidia’s Earth-2

This article was contributed by Jensen Huang, Founder and CEO, NVIDIA

The earth is warming. The past seven years are on track to be the seven warmest on record. The emissions of greenhouse gases from human activities are responsible for approximately 1.1°C of average warming since the period 1850-1900.

What we’re experiencing is very different from the global average. We experience extreme weather — historic droughts, unprecedented heatwaves, intense hurricanes, violent storms, and catastrophic floods. Climate disasters are the new norm.

We need to confront climate change now. Yet, we won’t feel the impact of our efforts for decades. It’s hard to mobilize action for something so far in the future. But we must know our future today — see it and feel it — so we can act with urgency.

To make our future a reality today, simulation is the answer.

To develop the best strategies for mitigation and adaptation, we need climate models that can predict the climate in different regions of the globe over decades.

Unlike predicting the weather, which primarily models atmospheric physics, climate models are multidecade simulations that model the physics, chemistry, and biology of the atmosphere, waters, ice, land, and human activities.

Climate simulations are configured today at 10- to 100-kilometer resolutions.

But greater resolution is needed to model changes in the global water cycle — water movement from the ocean, sea ice, land surface, and groundwater through the atmosphere and clouds. Changes in this system lead to intensifying storms and droughts.

Meter-scale resolution is needed to simulate clouds that reflect sunlight back to space. Scientists estimate that these resolutions will demand millions to billions of times more computing power than what’s currently available. It would take decades to achieve that through the ordinary course of computing advances, which accelerate 10x every five years.
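The “decades” claim follows directly from that growth rate: compounding 10x every five years, a given speedup factor takes log10(factor) times five years to arrive. A quick check:

```python
import math

# Years for compute to grow by `factor`, compounding 10x every five years.
def years_to_speedup(factor: float) -> float:
    return math.log10(factor) * 5

print(years_to_speedup(1e6))  # 30.0 -> a million-x speedup takes ~30 years
print(years_to_speedup(1e9))  # 45.0 -> a billion-x speedup takes ~45 years
```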

For the first time, we have the technology to do ultra-high-resolution climate modeling, to jump to lightspeed, and predict changes in regional extreme weather decades out.

We can achieve million-x speedups by combining three technologies: GPU-accelerated computing; deep learning and breakthroughs in physics-informed neural networks; and AI supercomputers, along with vast quantities of observed and model data to learn from.

And with super-resolution techniques, we may have within our grasp the billion-x leap needed to do ultra-high-resolution climate modeling. Countries, cities, and towns can get early warnings to adapt and make infrastructures more resilient. And with more accurate predictions, people and nations will act with more urgency.

So, we will dedicate ourselves and our significant resources to directing NVIDIA’s scale and expertise in computational sciences toward joining with the world’s climate science community.

NVIDIA this week revealed plans to build the world’s most powerful AI supercomputer dedicated to predicting climate change. Named Earth-2, or E-2, the system would create a digital twin of Earth in Omniverse.

The system would be the climate change counterpart to Cambridge-1, the world’s most powerful AI supercomputer for healthcare research. We unveiled Cambridge-1 earlier this year in the U.K. and it’s being used by a number of leading healthcare companies.

All the technologies we’ve invented up to this moment are needed to make Earth-2 possible. I can’t imagine a greater or more important use.

[Note: A version of this story originally ran on the NVIDIA blog.]

Jensen Huang founded NVIDIA in 1993 and has served since its inception as president, chief executive officer, and a member of the board of directors. In 2017, he was named Fortune’s Businessperson of the Year. In 2019, Harvard Business Review ranked him No. 1 on its list of the world’s 100 best-performing CEOs over the lifetime of their tenure. 


Nvidia’s GTC will draw 200K researchers for online event including metaverse session

The metaverse may be the stuff of science fiction, but it’s going to make an appearance at a pretty serious tech event: Nvidia’s annual GPU Technology Conference (GTC), an online event happening November 8-11.

GTC is expected to draw more than 200,000 attendees, including innovators, researchers, thought leaders, and decision-makers. More than 500 sessions focus on deep learning, data science, HPC, robotics, data center/networking, and graphics. Speakers will discuss the latest breakthroughs in healthcare, transportation, manufacturing, retail, finance, telecoms, and more.

I’m moderating a session on the vision for the metaverse, the universe of virtual worlds that are all interconnected, like in novels such as Snow Crash and Ready Player One. The panelists include Tim Sweeney, CEO of Epic Games; Morgan McGuire, chief scientist at Roblox; Willim Cui, vice president of Tencent Games; Jinsoo Jeon, head of metaverse at SK Telecom; Rev Lebaredian, vice president of simulation technology and Omniverse engineering at Nvidia; Christina Heller, CEO of Metastage; and Patrick Cozzi, CEO of Cesium. (We’ll air the panel at our own GamesBeat Summit Next event on November 9-10.)

“It’s a different twist to have a metaverse session,” said Greg Estes, vice president of corporate marketing and developer programs at Nvidia. “You know that the metaverse has become top of mind with so many other companies talking about it. Omniverse [the metaverse for engineers] is our product in that area. And so we’re clearly leaning into that, but Omniverse isn’t the only thing going on. And so we’re welcoming and embracing other conversations about that, because in typical Nvidia fashion, a lot of our success model is the fact that we are Switzerland. We’re a platform and a lot of companies are doing great work on our platform.”


That’s the general spirit of a lot of the sessions at GTC, Estes said.


Above: Jensen Huang is CEO of Nvidia. He gave a virtual keynote at the recent GTC event in the spring and will do so again in November.

Image Credit: Nvidia

“GTC is where attendees can hear from innovators who are in the same general space, but they’re taking different approaches to things,” Estes said. “There are a lot of things about the metaverse that are complementary to the Omniverse.”

Other companies represented among the speakers include Amazon, Arm, AstraZeneca, Baidu, BMW, Domino’s, Electronic Arts, Epic Games, Ford, Google, Kroger, Microsoft, MIT, Oak Ridge National Laboratory, OpenAI, Palo Alto Networks, Red Hat, Rolls-Royce, Salesforce, Samsung, ServiceNow, Snap, Stanford University, Volvo, and Walmart.

And Nvidia CEO Jensen Huang will announce new AI technologies and products in his keynote presentation, which will be livestreamed on Nov. 9 at 9 am Central European Time/4 pm China Standard Time/12 a.m. Pacific Standard Time. It will be rebroadcast at 8 am PST for viewers in the Americas.

“It’s fair to say that you can expect to hear product and technology announcements. From Jensen, you can expect to hear about new partnerships and lots of examples of actually implementing AI on the leading edge,” Estes said. “We’ll have a number of examples of lighthouse customers and end users and our ecosystem partners.”

Online-only approach


Above: Nvidia’s Cambridge-1 will be available to external U.K. scientists.

Image Credit: Nvidia

It’s the second major GTC event of the year. Traditionally, Nvidia held a big event in the spring and then a lot of smaller regional events. But with the pandemic, that has evolved into two major online events, Estes told VentureBeat in an interview.

Because of the delta variant of COVID-19, Nvidia opted to do another online-only event for the fall GTC.

“As for going back to physical events, we’re hoping for the spring but it’s of course hard to say,” Estes said. “On the other hand, I can’t see us doing physical-only ever again. There will always be really solid digital components going forward. It’s just been too successful. People like it a lot. And we draw a lot more people. And also we can also get to some speakers that we couldn’t get to before.”

Nvidia will make sessions available for viewing after the event.

“We’re expecting more than 200,000 registrations, which is what we had in the spring,” Estes said. “It’s just a fantastic thing to have that much interest and that many connections. For our developer community, we take all the GTC sessions and make them available in perpetuity for free. We archive these talks on Nvidia On Demand.”

For social interaction, Nvidia is using a third-party app dubbed BrainDate to arrange meetings. But Estes noted that, due to the resurgence in COVID, the company wasn’t comfortable having a lot of in-person gatherings yet. Over time, he expects virtual reality meetings, events, and collaborations to take off, as they can be more convenient than travel for a lot of people.

“AI technology is evolving so quickly that it makes sense to have more than one event a year,” Estes said.

Other sessions


Above: GPUs in the Nvidia Cambridge-1.

Image Credit: Nvidia

Ilya Sutskever, chief scientist at OpenAI, will discuss the history of deep learning and what the future might hold. Fei-Fei Li, professor of computer science at Stanford University, will discuss ambient intelligence (smart, sensor-based solutions) to illuminate the dark spaces of healthcare and take part in a Q&A with Kimberly Powell, Nvidia’s vice president of healthcare.

Bei Yang, vice president and technology studio executive at Disney Imagineering, will discuss how the company is using advanced technologies to “imagineer” the metaverse.

Shashi Bhushan, principal AI software and systems architect at Lockheed Martin, will describe how the company is using Nvidia Omniverse, the “metaverse for engineers,” to predict and fight wildfires.

Ross Krambergar, digital solutions for production planning at BMW, will describe how BMW is utilizing Nvidia Omniverse to realize their vision for a digital twin factory of the future to increase manufacturing flexibility.

Keith Perry, chief information officer at St. Jude Children’s Research Hospital, will explain how they used data science to advance treatments for life-threatening diseases in children. Nir Zuk, chief technology officer at Palo Alto Networks, will speak about AI for cybersecurity.

Anima Anandkumar, director of machine learning research at Nvidia and professor at Caltech, will speak in a panel on measuring and mitigating bias in AI models and run a session on advances in the convergence of AI and scientific computing.

Keith Strier, vice president of worldwide AI initiatives at Nvidia, and Mark Andrijanič, minister for digital transformation of Slovenia, will participate in a fireside chat to discuss how countries need to invest in AI, including infrastructure and data scientists.

Scientists at MIT, Amazon Web Services’ Sustainable Data Initiative, and Nvidia will explain how a group of public and private sector entities is providing climate data to scientists.

An expert panel will talk about the potential of Universal Scene Description (USD) for 3D creators in all industries. The panel includes Sebastian Grassia, project lead for USD at Pixar; Mohsen Rezayat, chief solutions architect at Siemens; Shawn Dunn, senior product manager at Epic Games; Simon Haegler, senior software developer at Esri R&D Center Zurich; Hilda Espinal, chief technology officer at CannonDesign; and Michael Kass, senior distinguished engineer at Nvidia.

Axel Gern, CTO at Daimler Trucks, will explain the strategy, challenges and opportunities of developing software-defined trucks for an autonomous future.

And Nvidia’s graphics wizards will reveal the technologies they used to create a virtual Jensen for the previous spring GTC keynote.

Emerging markets


Above: Nvidia’s Inception AI startups are from the green countries.

Image Credit: Nvidia

GTC will feature a series of sessions focused on business and technical topics in Africa, the Middle East and Latin America.

Speakers from organizations and universities, such as the Kenya AI Center of Excellence, Ethiopian Motion Design and Visual Effects Community, Python Ghana, Nairobi Women in Machine Learning & Data Science, and Chile Inria Research Center, will describe how emerging market developers are using AI to address challenges.

“We have more international speakers, and more content that shifts towards Europe and the Middle East,” Estes said. “AI is the center of gravity, but it’s not the only thing we’re doing. One of the things people are talking about is conversational AI. It touches a lot of different industries, from chatbots for call centers to healthcare, where you have a doctor who may have a patient for whom English isn’t their first language.”

A panel dubbed Bridging the Last Mile Gap with AI Education will feature experts and community leaders in Africa as they explain how they are democratizing AI and solving real-world challenges.

Representatives from Latin American government, industry, and academia will discuss the state of the AI ecosystem in Latin America and how to empower researchers and educators with GPUs and AI.

Experts will discuss natural language processing resources to build conversational AI for medium- and low-resource languages such as those in Africa, Arabia, and India.

Inception Venture Capital Alliance


Above: Nvidia’s Inception program has 8,500 AI startups.

Image Credit: Nvidia

Nvidia’s Inception AI program educates more than 8,500 companies that have potential for disruption. And Nvidia execs will talk about the company’s AI strategy and direction, focused on developers, startups, computing platforms, enterprise customers, and corporate development. More than 70 startups will share their business models involving conversational AI, drug discovery, autonomous systems, emerging markets, and other areas.

The panel will include Greg Estes, VP of corporate marketing and developer programs; Manuvir Das, head of enterprise computing; Shanker Trivedi, SVP of worldwide enterprise business; Vishal Bhagwati, head of corporate development; Mat Torgow, head of venture capital business development; and Kari Briski, VP of software product management for AI/HPC.

Ozzy Johnson, director of solutions architecture at Nvidia, will discuss technologies and key frameworks to accelerate a startup’s journey.

The pandemic has spurred investment and innovation in the healthcare and life sciences (HCLS) industry. Despite economic uncertainty, HCLS AI startups raised record funding. One panel, moderated by Renee Yao, head of global healthcare AI startups at Nvidia, will include the CEOs of startups Cyclica in biotech, IBEX in pathology, and Rayshape in ultrasound, and will cover trends, challenges, and technical breakthroughs in healthcare AI.

Diversity & Inclusion


Above: Nvidia’s Omniverse is a way to collaborate in simulated worlds.

Image Credit: Nvidia

GTC is structured as an open, all-access event available to virtually any community around the world. Sessions have been curated to inform and inspire developers, researchers, scientists, educators, professionals, and students from historically underrepresented groups.

Topics will include building better datasets and making AI more inclusive. Nvidia partners with organizations including LatinX in AI, Tech Career and W.AI in Israel, and Ewha Womans University of Korea to offer complimentary access to Nvidia Deep Learning Institute workshops for diverse communities.

“We’re doing a lot of educational programs and training with our Deep Learning Institute, and doing other initiatives with educators from historically black colleges and universities, and we’re doing things in Africa,” Estes said. “We’re doing things specifically targeting women in technology to try to bring these communities which have historically been underrepresented to train them better to avail them of the leading thinking to work with educators.”

Nvidia offers free teaching kits for educators to get children interested in AI and engineering.

“It’s important that we’re talking to the next generation coming up, helping both younger people and then mid-career professionals who want to learn new skills,” Estes said.

One of the diversity sessions brings together academics, industry experts and the founder of W.AI to discuss how to help more women join the field of data science and AI through mentoring opportunities and supporting advanced degree enrollment.

Louis Stewart, head of strategic initiatives for Nvidia’s Developer Ecosystem, will speak with faculty and student researchers from the Africana Digital Ethnography Project on efforts to build new and unique datasets for better natural language understanding from all parts of the world.

An AI for Smart City session will talk about where AI has been deployed to solve urban challenges, ethical challenges associated with using AI in urban settings, and how it could address challenges stemming from urbanization, failing infrastructure, traffic management, population health difficulties, energy crises, and more.

The event will have regional speakers from Europe, the Middle East, Africa, Israel, India, China, Japan, South Korea, Taiwan, and southern Asia Pacific.

“There are smart people everywhere. And that’s a really important theme,” Estes said. “There is no reason in the world why certain countries should have an advantage over others when it comes to the brainpower of people doing AI work. We’re putting energy into reaching out to those communities. Africa is the example I gave earlier, but certainly in Latin America, and all across Asia Pacific, there is good thinking and great work being done today. In Singapore, and Vietnam, and other areas like that. And for us to be able to kind of bring that together in one place is really cool.”


NVIDIA’s new ‘GeForce Now RTX 3080’ streams games at 1440p and 120 fps

NVIDIA has unveiled its next-generation cloud gaming platform called GeForce Now RTX 3080 with “desktop-class latency” and 1440p gaming at up to 120 fps on PC or Mac. The service is powered by a new gaming supercomputer called the GeForce Now SuperPod and costs double the price of the current Priority tier.

The SuperPod is “the most powerful gaming supercomputer ever built,” according to NVIDIA, delivering 39,200 TFLOPS, 11,477,760 CUDA cores and 8,960 CPU cores. NVIDIA said it will provide an experience equivalent to 35 TFLOPS, or roughly triple the Xbox Series X, and about equal to a PC with an 8-core CPU, 28GB of DDR4-3200 RAM and a PCIe Gen 4 SSD.
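Those headline numbers invite a couple of quick sanity checks. The snippet below uses only figures from the announcement plus the Xbox Series X’s commonly cited ~12.15 TFLOPS; the concurrent-stream estimate is our own back-of-the-envelope division, not an NVIDIA claim.

```python
superpod_tflops = 39_200      # NVIDIA's headline figure for one SuperPod
per_user_tflops = 35          # claimed per-user experience
xbox_series_x_tflops = 12.15  # commonly cited spec, not from NVIDIA's announcement

print(per_user_tflops / xbox_series_x_tflops)  # ~2.9, i.e. "roughly triple"
print(superpod_tflops // per_user_tflops)      # 1,120 RTX 3080-class streams per pod
```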


As such, you’ll see 1440p gaming at up to 120fps on a Mac or PC, and even 4K HDR on a Shield, though NVIDIA didn’t mention the refresh rate for the latter. It’ll also support 120 fps on mobile, “supporting next-gen 120Hz displays,” the company said. By comparison, the GeForce Now Priority tier is limited to 1080p at 60 fps, with adaptive VSync available in the latest update.

It’s also promising “click-to-pixel” latency as low as 56 milliseconds, thanks to tricks like adaptive sync that reduce buffering, supposedly beating other services and even local, dedicated PCs. However, that figure assumes a 15-millisecond round-trip delay (RTD) to the GeForce Now data center, something that obviously depends on your internet provider and where you’re located.

NVIDIA’s claims aside, it’s clearly a speed upgrade over the current GeForce Now Priority tier, whether you’re on a mobile device or PC. There’s a price to pay for that speed, though. The GeForce Now Priority tier started at $50 per year and recently doubled to $100, which is already a pretty big ask. But the RTX 3080 tier is $100 for six months (around double the price) “in limited quantities,” with Founders and Priority members getting early access starting today. If it lives up to the claims, it’s cheaper than buying a new PC, in any case.
