
Nintendo’s Switch sales drop as it contends with chip shortage

Nintendo’s Switch sales fell significantly last quarter, dropping to 3.43 million units from 4.45 million during the same period last year, according to its earnings report. Software sales also fell year over year, to 41.4 million units from 45.3 million. All of that resulted in an operating profit of 101.6 billion yen ($763 million), down from last year and short of expectations. 

The company chalked up the Switch sales issue to a parts shortage, the same thing that bedeviled Sony during the same quarter. “Hardware production was impacted by factors such as the global shortage of semiconductor components, resulting in a decrease of hardware shipments,” the company said. It noted that the OLED model made up a large chunk of Switch sales with 1.52 million units sold, and the lower margins on that model dragged profit down a bit.

While game sales also dropped, Nintendo managed to boost the overall percentage of first-party games sold. In fact, it was the second-best first quarter for first-party game sell-through since the Switch launched, second only to Q1 2021, which was fueled by Animal Crossing: New Horizons. All told, Nintendo could still call the quarter a relative success, considering that game buyers overall spent 13 percent less this year than in 2021, according to Bloomberg.

Some of that was aided by the launch of three key games, the company pointed out, particularly Nintendo Switch Sports, which arrived on April 29th. Mario Strikers: Battle League launched on June 10th, while Fire Emblem Warriors: Three Hopes arrived on June 24th. “More than 100 million users played Nintendo Switch in the latest 12-month period,” the company added. 

Nintendo is hoping that upcoming games will help out next quarter. Xenoblade Chronicles 3 just launched, Mario Kart 8 Deluxe – Booster Course Pass: Wave 2 arrives on August 4th, Splatoon 3 will be released on September 9th, and Kirby’s Dream Buffet is due sometime this summer. The company is also launching an OLED Switch Splatoon 3 Edition on August 26th. 

All products recommended by Engadget are selected by our editorial team, independent of our parent company. Some of our stories include affiliate links. If you buy something through one of these links, we may earn an affiliate commission.

Repost: Original Source and Author Link

Categories
Computing

Data recoverers finally crack the locked-down Apple M1 chip

Apple’s highly secure M1 chip is a tough nut to crack, but it appears the experts at DriveSavers have finally done it. The company announced it “may be the first” to recover data from the M1 in a recent press release.

DriveSavers is confident in this claim because its engineers successfully transplanted an M1 chip from a faulty logic board to a functional one, enabling them to access the data.

It’s quite a feat, particularly because the M1 has a lot of security measures preventing outside users from manually accessing the data. For one, the SSD controller, the component that controls the input/output of data on the drive, is housed in the M1 itself. That means that if the SoC fails, the ability to access the drive goes with it. Combine that with the encrypted storage functionality the M1 inherits from the T2 security chip, and it’s easy to see why accessing an Apple SSD is no simple task anymore.

Then, there’s the logic board itself. As DriveSavers mentions in the press release: “There are thousands of surface-mounted micro-components on a logic board, and Apple has done their best to obfuscate what is necessary to gain access to the encrypted data. Without that knowledge, data recovery from a failed logic board is impossible.”

So, in order to access the M1 SSD, the data engineers had to remove the SoC from the faulty logic board and reattach it to a functioning one, all while nailing every micro-component needed to allow the logic board and the system components to communicate. It no doubt took a lot of trial and error, but this is a big step for data recovery on Macs.

While Apple’s strict data security measures sound great in theory, they can be a disaster if you lose important data. That’s why it’s important to create backups of your Mac and any other Apple devices. Using tools like Time Machine, or even an external backup drive, will save you from ever having to swap out a logic board on your own.



12-inch MacBook (2023): new chip, thin design, and more

Apple unceremoniously killed off the 12-inch MacBook in 2019. While it was one of Apple’s most daring products in recent years, it ultimately failed to make as much of an impact as the company hoped. Yet rumors have been floating for some time that it could make a spectacular return.

If that proves to be true, it would be a comeback for the ages. So, what can we expect? Will the new model be like the original 12-inch MacBook, or will it be taken in a new direction? We’ve rounded up all the news and rumors into this post to help you get up to speed.

Price and release date

Right now, it looks like the 12-inch MacBook is a long way from release. Bloomberg reporter Mark Gurman, who has been one of the strongest proponents of the 12-inch MacBook revival idea, says the device is “still in early development” and probably won’t see the light of day until late 2023 or early 2024.

However, other prominent analysts are more skeptical. Display industry expert Ross Young and seasoned Apple tipster Ming-Chi Kuo both allegedly checked with their sources and came up empty-handed. Young, for his part, has said “Apple’s strategy for notebooks is currently 13” and larger. Companies in the MacBook Pro display supply chain we talked to are not aware of [the 12-inch MacBook].”

It could be that the sources Young and Kuo spoke to are not aware of the device simply because it is so early in the development process and manufacturers have thus not been informed of it. Or it could be that Gurman is wrong on the release date. Only time will tell.

As for the pricing, that remains unknown at this point. The 2015 12-inch MacBook came with a launch price of $1,299, but it’s possible the revived version could cost more since Apple has been increasing its laptop prices of late. Apple also needs to worry about how such a laptop would fit into its already-full lineup, which includes both the M2 MacBook Air and the M2 MacBook Pro.

Time for a fresh design?


Recently, Apple has moved to bring its MacBook Pro and MacBook Air lines together in terms of visual styling, with both laptops adopting similar flat-edged designs. We think it’s therefore likely that the 12-inch MacBook will follow suit.

Its design will probably depend a lot on which MacBook family it is part of. Will it be a shrunken-down MacBook Air and take after that device’s design language, or will it have a slightly chunkier format akin to the MacBook Pro? Or will it instead sit in its own individual MacBook line and thus look slightly different? That’s unknown for now.

The original 12-inch MacBook was used as a pioneering device. It was revolutionary in many ways and led to many of its features and design elements showing up in other MacBooks. That torch seems to have been moved to the MacBook Pro and MacBook Air these days, but who knows? Maybe Apple will be ready for something entirely new by the time a new 12-inch MacBook comes around.

Much-improved performance

The performance of the 12-inch MacBook from 2015 was disappointing. Because it used such a tiny chassis, it had to use an M-series chip from Intel, which was not up to much more than web browsing and sending emails. Things have changed a huge amount since then, though, with Apple silicon meaning so much more is possible.

We expect the 12-inch MacBook’s Apple silicon chip will enable it to be much more capable than before. It won’t be on the level of the MacBook Pro, of course, as it will have a very different target audience, but it certainly won’t be anything to sniff at.

There is one other possibility, though. Apple leaker Majin Bu has claimed on Twitter that the laptop could be part of the MacBook Pro range and come with an M2 Pro or M2 Max chip. Majin Bu has a mixed track record, though, and we find this claim a little unlikely.

For one thing, if Gurman’s release schedule of late 2023 or early 2024 is correct, that would probably fall in the M3 Pro/Max release cycle, not the M2 series. As well as that, the 12-inch MacBook has always been positioned as a lightweight device primed for travel, not the intensive tasks Apple’s Pro and Max chips are designed for. Still, Majin Bu has been accurate in the past, so we can’t rule out this claim entirely.

Features: short on fans and ports?


Given that the MacBook Air and 12-inch MacBook will likely be promoted as lightweight laptops that are perfect for traveling users, it wouldn’t surprise us if both ended up being fanless and silent in operation. Given how small the chassis is likely to be, it’s possible Apple won’t be able to squeeze a fan in there even if it wanted to.

The 2015 12-inch MacBook became infamous for only coming with one port — a single USB-C slot that handled both data and power and helped usher in the USB-C era. Considering ports take up space on motherboards — and considering how small the 12-inch MacBook’s motherboard will probably be — it wouldn’t come as a shock if Apple repeats that trick this time around.

There is a chance it could come with MagSafe too, though, since that’s made a comeback in recent devices. That’s just speculation at the moment.

There’s one final possibility: it might come with the butterfly keyboard. Apple outfitted the 2015 12-inch MacBook with this divisive keyboard in part because it was so thin and helped slim down the device, so it would make sense if it made a return. After all, Apple’s Phil Schiller has previously claimed the company is still working on this keyboard. Still, it would be a controversial move, especially considering how good the current Magic Keyboard is.

12-inch MacBook: our wishlist


The original 12-inch MacBook was expensive for what it offered, partly because it was so experimental and different from anything Apple offered at the time. We therefore wouldn’t be surprised if the new 12-inch MacBook also cut back on features. Still, there are some other things we’d love to see.

First up is a decent webcam. Apple has started upgrading its laptops to 1080p webcams, but the company might be tempted to only equip the 12-inch MacBook with a 720p camera. That would be disappointing given it’ll likely be used on the road by many people, where video call quality is important.

We’re also hoping Apple will put more than one USB-C port on the laptop, as one is just not enough these days. It means you need to unplug your charging cable to hook up a peripheral, for example, which is just impractical.



Motherboard Prices Are Rising, But Don’t Blame Chip Shortage

It’d be easy to blame the rising prices of motherboards on the ongoing global chip shortage, but with Intel’s latest Z690 motherboards, there’s reason to think a larger issue is driving the price increase.

An article by TechPowerUp makes the point that there are a lot of features that have to go into these new motherboards compared to previous generations, like PCIe 5.0, DDR5, and the new LGA 1700 socket.

First off, one of the biggest differences between LGA 1700, the socket used by the Z690 chipset, and the previous socket, LGA 1200, is the number of pins. The LGA 1700 socket holds 1,700 little pins, whereas LGA 1200, which paired with the Z590 chipset, held 1,200. Logically, more pins mean more materials, and thus more money needed to produce each socket.

The question is, how much more can 500 extra pins really cost? There’s no concrete answer, but according to TechPowerUp, the LGA 1700 socket is around four times more expensive to produce than LGA 1200.

Funnily enough, the price difference between the Z590 and Z690 chipsets themselves is just $1.
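The figures above make the mismatch easy to see in a quick back-of-envelope comparison. The pin counts come from the socket specs cited above; the roughly 4x cost multiple is TechPowerUp's estimate, not an official Intel figure:

```python
# Back-of-envelope: socket pin-count growth vs. the reported cost growth.
# The ~4x cost multiple is TechPowerUp's estimate, not an official figure.
lga1700_pins = 1700
lga1200_pins = 1200

pin_ratio = lga1700_pins / lga1200_pins   # ~1.42x more pins
cost_ratio = 4.0                          # reported production-cost multiple

print(f"Pin count increase: {pin_ratio:.2f}x")
print(f"Reported socket cost increase: {cost_ratio:.1f}x")
# Cost grows far faster than pin count, so raw materials alone can't
# explain it; tighter tolerances and a new socket design likely do.
```

The point of the sketch is simply that a ~42% increase in pins can't linearly explain a ~300% increase in cost.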

While PCIe 5.0 SSDs aren’t ready to ship yet, PCIe 5.0 support is already present on the Z690 chipset, and it adds a price increase of about 10% to 20% over PCIe 4.0 because of the components needed to implement it.

The higher-end Z690 motherboards, like the Asus ROG Maximus Z690 Hero, use DDR5 RAM, which operates quite differently and has to be accounted for during board design.

We also can’t ignore the design of the CPUs being inserted into these motherboards: Intel’s 12th-Gen Alder Lake chips. Alder Lake introduces many features we’re not accustomed to, like dedicated performance and efficiency cores.

All of the new features the Z690 chipset holds are unique, and we’ve yet to see AMD include them. Unfortunately, this is just another sign of the times, with PC hardware becoming more expensive and less accessible to gamers who want in.



Texas Instruments Blamed As Root of Global Chip Shortage

Digital Trends may earn a commission when you buy through links on our site.

Explanations for the global chip shortage have been vague and varied, but a new report is shedding some light on the unexpected source of the shortage.

The report comes from DigiTimes, which claims that major tech companies are pointing to a singular cause for the global chip shortage — namely, the company Texas Instruments. That’s right, the company that makes the graphing calculators we all used in high school.

Texas Instruments does a lot more than make calculators these days. Take a quick look at the Texas Instruments website and you’ll see it offers a gargantuan number of products, including PWM controllers, RGB LED display drivers, and more.

Most importantly, the company has made a name for itself as a manufacturer of analog chips, which help do things like regulate our computers’ voltage levels. These are small but vital parts of modern chip design that, according to some, are causing the holdup in production.

TSMC (Taiwan Semiconductor Manufacturing Company) is among the companies reportedly pointing the finger. TSMC is the world’s largest contract chipmaker, and it makes chips for companies like Apple, AMD, Nvidia, and many others outside the world of computing.

According to the report, TSMC says Texas Instruments, the market leader in these analog chips, is creating a bottleneck in providing access to these important components.

In a recent Reuters article covering the company’s quarterly earnings, the chief financial officer of Texas Instruments admitted that inventory levels were indeed low, causing the company to miss revenue goals.

While Texas Instruments’ production woes are likely not the sole “cause” of the global chip shortage, they’re a microcosm of how delicate supply chains can be. No one wants to take the blame, of course, but clearly a production shortage of one simple component can have wide-ranging effects on everything from car manufacturing to your ability to buy an RTX graphics card or PlayStation 5 this holiday season.

Some of the biggest voices in the industry have claimed the chip shortage could last throughout 2022 and even well into 2023.



Sony reportedly cuts PS5 production again as chip shortages and shipment issues bite

Sony’s PlayStation 5 may not be able to beat the PS4’s first-year sales record due to an ongoing component shortage, according to Bloomberg. The company has reportedly cut its production forecast from 16 million units down to 15 million, putting its target of 14.8 million PS5 sales by March in jeopardy, if the report is accurate. It also makes a bad situation worse in terms of consumers being able to pick up a PS5 over the holidays. 

Sony is supposedly having trouble not just with parts supply but with shipping logistics as well, according to Bloomberg‘s sources. The problems are due in part to uneven vaccine rollouts in the nations where Sony’s chips are made, and to shortages of essential parts like power chips.

The situation has affected other console makers like Nintendo, and even the launch of an entirely new console, Valve’s Steam Deck, which was pushed back until sometime in 2022. It’s gotten to the point that publishers are reportedly seeing sales gradually shift over to PC versions of games due to the lack of consoles.

March is still a long way off, so Sony might yet pull off the sales record. But it’s rather ominous that this report is arriving just ahead of Christmas, so if you’re looking for a PS5 as a gift and see an opportunity to get one, better snap it up quick. 



Chip developer Cerebras bolsters AI-powered workload capabilities with $250M

Cerebras Systems, the California-based company that has built a “brain-scale” chip to power AI models with 120 trillion parameters, said today it has raised $250 million in funding at a valuation of over $4 billion. Cerebras claims its technology significantly accelerates today’s AI workflows at a fraction of the power and space, and that its innovations will support the multi-trillion-parameter AI models of the future.

In a press release, the company stated that this additional capital will enable it to further expand its business globally and deploy its industry-leading CS-2 system to new customers, while continuing to bolster its leadership in AI compute.

Cerebras’ cofounder and CEO Andrew Feldman noted that the new funding will allow Cerebras to extend its leadership to new regions. Feldman believes this will aid the company’s mission to democratize AI and usher in what it calls “the next era of high-performance AI compute” — an era where the company claims its technology will help to solve today’s most urgent societal challenges across drug discovery, climate change, and much more.

Redefining AI-powered possibilities

“Cerebras Systems is redefining what is possible with AI and has demonstrated best in class performance in accelerating the pace of innovation across pharma and life sciences, scientific research, and several other fields,” said Rick Gerson, cofounder, chairman, and chief investment officer at Falcon Edge Capital and Alpha Wave.

“We are proud to partner with Andrew and the Cerebras team to support their mission of bringing high-performance AI compute to new markets and regions around the world,” he added.


Cerebras’ CS-2 system, powered by the Wafer Scale Engine (WSE-2), the largest chip ever made and the fastest AI processor to date, is purpose-built for AI work. Feldman told VentureBeat in an interview that in April of this year the company more than doubled the capacity of the chip, bringing it up to 2.6 trillion transistors, 850,000 AI-optimized cores, 40GB of on-chip memory, 20PB/s of memory bandwidth, and 220Pb/s of fabric bandwidth. He noted that for AI work, big chips process information more quickly and produce answers in less time.

With only 54 billion transistors, the largest GPU pales in comparison to the WSE-2, which has 2.55 trillion more transistors. With 56 times the chip size, 123 times more AI-optimized cores, 1,000 times more high-performance on-chip memory, 12,733 times more memory bandwidth, and 45,833 times more fabric bandwidth than its GPU competitors, the WSE-2 makes the CS-2 the fastest AI system in the industry, the company says. It also says its software is easy to deploy and enables customers to use existing models, tools, and flows without modification, as well as to write new ML models in standard open source frameworks.
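The multipliers quoted above follow from the spec-sheet figures; here is a quick sanity check. The GPU baseline figures are back-solved from the article's stated ratios, not taken from any vendor spec:

```python
# Sanity-check two of the WSE-2 vs. largest-GPU multipliers quoted above.
wse2_transistors = 2.6e12   # 2.6 trillion (WSE-2)
gpu_transistors = 54e9      # 54 billion (largest GPU cited)

# "2.55 trillion more transistors"
delta = wse2_transistors - gpu_transistors
print(f"Transistor delta: {delta / 1e12:.2f} trillion")  # 2.55

# "123 times more AI-optimized cores" implies this GPU core baseline:
wse2_cores = 850_000
implied_gpu_cores = wse2_cores / 123
print(f"Implied GPU core count: {implied_gpu_cores:.0f}")  # ~6911
```

Both back-solved values line up with the article's claims, which is a useful check that the quoted ratios and the raw spec numbers are internally consistent.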

New customers

Cerebras says its CS-2 system is delivering a massive leap forward for customers across pharma and life sciences, oil and gas, defense, supercomputing centers, national labs, and other industries. The company announced new customers including Argonne National Laboratory, Lawrence Livermore National Laboratory, Pittsburgh Supercomputing Center (PSC) for its groundbreaking Neocortex AI supercomputer, EPCC, the supercomputing center at the University of Edinburgh, Tokyo Electron Devices, GlaxoSmithKline, and AstraZeneca.


The Series F investment round was spearheaded by Alpha Wave Ventures, a global growth-stage Falcon Edge-Chimera partnership, along with Abu Dhabi Growth (ADG).

Alpha Wave Ventures and ADG join a group of strategic world-class investors including Altimeter Capital, Benchmark Capital, Coatue Management, Eclipse Ventures, Moore Strategic Ventures, and VY Capital. Cerebras has now expanded beyond the U.S., with new offices in Tokyo, Japan, and Toronto, Canada. On the back of this funding, the company says it will continue its engineering work, expand its engineering force, and hunt for talent all over the world going into 2022.

VentureBeat

VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative technology and transact.

Our site delivers essential information on data technologies and strategies to guide you as you lead your organizations. We invite you to become a member of our community, to access:

  • up-to-date information on the subjects of interest to you
  • our newsletters
  • gated thought-leader content and discounted access to our prized events, such as Transform 2021: Learn More
  • networking features, and more

Become a member



Nintendo lowers its Switch sales forecast due to global chip shortage

Nintendo has cut its Switch sales forecast due to ongoing semiconductor shortages, the company announced in its earnings report. It now expects to ship 24 million Switch units for the fiscal year ending March 31, 2022 instead of the 25.5 million units it had originally predicted. 

The issue came into focus this quarter, as Nintendo managed to ship just 3.83 million Switch consoles compared to 6.86 million during the same quarter last year. So far, its net sales for the year are down 18.9 percent to 624.2 billion yen ($5.46 billion) year-over-year. 

That’s not a huge surprise, however, as Switch console and software sales exploded during the COVID-19 lockdowns, and following that up has proved to be impossible, particularly as chips and components have since become more scarce. Today’s numbers don’t include any sales of the Switch OLED, as the earnings only cover the period up to September 30th, a full week before the updated console arrived.

Despite the revised sales expectations, Nintendo expects to match total revenue of 1,600 billion yen ($14 billion) from the previous fiscal year, thanks in part to games. It aims to sell 200 million software units, 10 million more than last year, which would help offset the console sales drop. Upcoming titles include Pokémon Brilliant Diamond and Shining Pearl plus a Zelda-themed Game & Watch.

The most popular games so far this fiscal year include The Legend of Zelda: Skyward Sword HD (3.6 million units sold), Mario Kart 8 Deluxe (3.34 million units) and Animal Crossing: New Horizons (2.22 million units). The launch of Metroid Dread came after the earnings period Nintendo is reporting today.

Despite the reduced expectations and tepid console sales this quarter, Nintendo has now sold 92.87 million Switch units to date. That’s still short of the Wii, which has the company’s current home console sales record of 101.63 million units shipped. However, if Nintendo comes close to matching the 11.57 million units sold during last year’s holiday period, the Switch — aided by the new Switch OLED model — could finally top that mark by the end of the year. 
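The arithmetic behind that projection is simple, using the figures reported above:

```python
# Could the Switch pass the Wii's record by year's end?
# All figures (millions of units) are from Nintendo's reporting as cited above.
switch_to_date = 92.87   # Switch units sold so far
wii_record = 101.63      # Wii lifetime home-console record
last_holiday = 11.57     # Switch units sold in last year's holiday quarter

projected = switch_to_date + last_holiday
print(f"Projected total: {projected:.2f}M vs. Wii record {wii_record}M")
# 104.44M > 101.63M, so merely matching last year's holiday quarter
# would push the Switch past the Wii.
```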



Amazon launches AWS instances powered by Habana’s AI accelerator chip

Amazon Web Services (AWS), Amazon’s cloud services division, today announced the general availability of Elastic Compute Cloud (EC2) DL1 instances. While new instance types generally aren’t particularly novel, DL1 (specifically DL1.24xlarge) is the first type in AWS designed for training machine learning models, Amazon says — powered by Gaudi accelerators from Intel-owned Habana Labs.

Developers including Seagate, Fractal, Indel, Riskfuel, and Leidos were given early access to Gaudi running on AWS prior to today’s launch. “This is the first AI training instance by AWS that is not based on GPUs,” Habana wrote in a blog post. “The primary motivation to create this new training instance class was presented by Andy Jassy in 2020 re:Invent: ‘To provide our end-customers with up to 40% better price-performance than the current generation of GPU-based instances.’”

Cheaper model training

Machine learning is becoming mainstream as enterprises realize the business impact of deploying AI models in their organizations. Using machine learning generally starts with training a model to recognize patterns by learning from datasets, and then applying the model to new data to make predictions. Maintaining the prediction accuracy of a model requires retraining the model frequently, which takes a considerable amount of resources — resulting in increased expenses. Google subsidiary DeepMind is estimated to have spent $35 million training a system to learn the Chinese board game Go.
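To see how frequent retraining compounds into real expense, here's a toy cost model. Every input below is a hypothetical illustration, not a figure from the article:

```python
# Toy model of annual retraining cost. All inputs are hypothetical and
# chosen only for illustration; real rates and durations vary widely.
hourly_rate = 25.0    # assumed $/hr for a GPU training instance
hours_per_run = 100   # assumed hours per full retraining run
runs_per_year = 12    # e.g., monthly retraining to maintain accuracy

annual_cost = hourly_rate * hours_per_run * runs_per_year
print(f"Annual retraining cost: ${annual_cost:,.0f}")  # $30,000
```

Even at these modest assumed numbers the bill adds up, which is why per-hour price-performance is the metric cloud providers compete on.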

With DL1, AWS’ first answer to Google’s tensor processing units (TPUs), a set of custom accelerator chips running in Google Cloud Platform, Amazon and Habana claim that AWS customers can now train models faster and with up to 40% better price-performance compared to the latest GPU-powered EC2 instances. The DL1 instances leverage up to eight Gaudi accelerators built specifically to speed up training, paired with 256GB of high-bandwidth memory, 768GB of system memory, second-generation Amazon custom Intel Xeon Scalable (Cascade Lake) processors, 400 Gbps of networking throughput, and up to 4TB of local NVMe storage.

Customers coming from a GPU- or CPU-based instance have to use Habana’s SynapseAI SDK to migrate existing models. Habana also provides pre-trained models for image classification, object detection, natural language processing, and recommendation systems in its GitHub repository.

“The use of machine learning has skyrocketed. One of the challenges with training machine learning models, however, is that it is computationally intensive and can get expensive as customers refine and retrain their models,” AWS EC2 VP David Brown said in a statement. “AWS already has the broadest choice of powerful compute for any machine learning project or application. The addition of DL1 instances featuring Gaudi accelerators provides the most cost-effective alternative to GPU-based instances in the cloud to date. Their optimal combination of price and performance makes it possible for customers to reduce the cost to train, train more models, and innovate faster.”

In the June 2021 results from MLPerf Training, an industry benchmark for AI training hardware, an eight-Gaudi system took 62.55 minutes to train a variant of the popular computer vision model ResNet and 164.37 seconds to train the natural language model BERT. Direct comparisons to the latest generation of Google’s TPUs are hard to come by, but 4,096 fourth-gen TPUs (TPUv4) can train a ResNet model in about 1.82 minutes and 256 TPUv4 chips can train a BERT model in 1.82 minutes, MLPerf Training shows.

Beyond ostensible performance advantages, DL1 delivers cost savings, or so Amazon and Habana assert. Compared with three GPU-based instances, p4d.24xlarge (which features eight Nvidia A100 40GB GPUs), p3dn.24xlarge (eight Nvidia V100 32GB GPUs), and p3.16xlarge (eight V100 16GB GPUs), DL1 carries an on-demand hourly rate of $13.11 when training a ResNet model. That’s compared to between $24.48 per hour for p3 and $32.77 per hour for p4d.

“Based on Habana’s testing of the various EC2 instances and the pricing published by Amazon, we find that relative to the p4d instance, the DL1 provides 44% cost savings in training ResNet-50. For p3dn end-users, the cost-saving to train ResNet-50 is 69%,” Habana wrote. “While … Gaudi does not pack as many transistors as the 7-nanometer … A100 GPU, Gaudi’s architecture — designed from the ground-up for efficiency — achieves higher utilization of resources and comprises fewer system components than the GPU architecture. As a result, lower system costs ultimately enable lower pricing to end-users.”
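Note that "price-performance" folds in both the hourly rate and the time-to-train, which is why the quoted savings percentages can't be reproduced from hourly rates alone. Here's a sketch of the calculation; the hourly rates are the article's on-demand figures, but the training durations are invented purely for illustration:

```python
# Cost-to-train = hourly rate x hours needed to finish the job.
# Hourly rates are the article's on-demand figures; the durations below
# are hypothetical, chosen only to show that savings depend on both
# price and speed, not price alone.
dl1_rate, p4d_rate = 13.11, 32.77   # $/hr (from the article)
dl1_hours, p4d_hours = 1.5, 1.0     # hypothetical ResNet training times

dl1_cost = dl1_rate * dl1_hours     # $19.67
p4d_cost = p4d_rate * p4d_hours     # $32.77
savings = 1 - dl1_cost / p4d_cost
print(f"Cost savings vs. p4d: {savings:.0%}")  # ~40% under these assumptions
```

Under these made-up durations the slower-but-cheaper instance still comes out roughly 40% cheaper per training run; with different durations the percentage shifts accordingly, which is how figures like Habana's 44% arise.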

Future developments

When Intel acquired Habana for roughly $2 billion in December 2019, winding down the AI accelerator hardware developed by its Nervana division in the process, it looked to be a shrewd move on the part of the chip giant. Indeed, at its re:Invent conference last year, Jassy revealed that AWS had invested in Habana’s chips to expedite their time to market.

As an EETimes piece notes, cloud providers have so far been cautious about investing in third-party chips with new compute architectures for AI acceleration. For example, Baidu offers its Kunlun chip, while Alibaba developed Hanguang. Chips from startups Graphcore and Groq are available in Microsoft’s Azure cloud and Nimbix, respectively, but are prioritized for customers “pushing the boundaries of machine learning.”

The DL1 instances will sit alongside Amazon’s AWS Trainium hardware, a custom accelerator set to become available to AWS customers this year. As for Habana, the company says it’s working on its next-generation Gaudi2 AI processor, which takes the Gaudi architecture from 16 nanometers to 7 nanometers.

DL1 instances are available for purchase as on-demand instances, with savings plans, as reserved instances, or as spot instances. They’re currently available in the US East (N. Virginia) and US West (Oregon) AWS Regions.



Intel CTO Greg Lavender interview — Why the chip maker is spending on both manufacturing and software

Intel has been on a spending spree ever since Pat Gelsinger returned to the company as CEO earlier this year. He pledged to spend $20 billion on U.S. factories and another $95 billion in Europe. Those expenses are scary to investors as they could take a toll on the chip giant’s bottom line, but Gelsinger said he hopes they will pay off over four or five years.

And Intel is making investments in other ways too. In June, Gelsinger brought aboard Greg Lavender, formerly of VMware, as chief technology officer and senior vice president and general manager of the Software and Advanced Technology Group.

I spoke with Lavender in an interview ahead of the online Intel Innovation event happening on October 27-28. At that event, a revival of the Intel Developer Forum that Gelsinger used to lead years ago, Intel will re-engage with developers.

The event will highlight not only what Intel is doing with its manufacturing recovery (after multiple years of delays and costly mistakes), but also software, such as Intel’s oneAPI technology. Lavender is tasking Intel’s thousands of software engineers with creating more sophisticated software that brings more value through a systems-focused approach, rather than just a chip-based approach.


We talked about a wide variety of subjects across the spectrum of technology. Here’s an edited transcript of our interview.

Above: Greg Lavender is CTO of Intel.

Image Credit: Intel

VentureBeat: Tell me more about yourself. This seems like a very different role for you.

Greg Lavender: I’ve been in the technology industry for a long time, working for hardware companies like Sun and Cisco. I was a software engineer for 25 years, writing system software, starting out as a network software engineer. Always working close to the metal. I have graduate degrees in engineering and computer science. We all get the same courses on Maxwell’s electromagnetic theory and physics. I’m a math geek. But I came up with the growth of the industry, right? Pat is three months older than me. Our careers have kind of tracked along. We’ve known each other for not quite 14 years.

VentureBeat: What is the task that [Intel CEO] Pat Gelsinger gave you when he brought you aboard?

Lavender: We’ve known each other since I was running Solaris engineering. He was CTO at Intel. Intel launched the Nehalem platforms, if you remember back when that was their first server CPU. We were only shipping AMD Opteron, dual socket, dual core boxes at the time. Pat gave us some money to port it over to the Intel CPU chipsets. We got to know each other and built a trust relationship there. He obviously hired me into VMware and continued that relationship. He knows I’ve got that hardware and software background.

He surprised me when he called me up. I understood the CTO part, but then he also said I’d be the SVP GM of the software group. I said, “How big is that software group?” He said, “Well, we don’t have a software group. We have fragmented parts of software across the company.” In my first 120 days, about how long I’ve been here, I ran a defrag, a disk defrag, and pulled the 6,000-person software organization together. Everything from firmware to BIOS to compilers to operating systems, all the Linux, Windows, Chrome, Android. All of our system software, all the security software.

I have a big team now. There’s other parts of software going on in the company, but I’m in the driver’s seat for the software strategy and ensuring the software quality for every hardware product we ship.

Above: Intel is focusing on oneAPI to make software creation easier.

Image Credit: Intel

VentureBeat: Is this a smaller percentage of the staff than it would have been in different years? There were things like Intel Architecture Labs and some of the investments that happened in the last decade way outside the chip space. Has that narrowed down again to a smaller percentage of the overall employees?

Lavender: We have a lot, and I’m hiring more. But I’d just say that Pat came in with his eight years at VMware. I was there for half of that. It’s a real software mindset, that the value of software is enabling the open source software ecosystem. Maybe we don’t need to directly monetize our software, right? We can monetize our very diverse platforms.

I’ve spent most of my time here pushing changes into the new compiler system. We just delivered the AMX accelerator code into the Linux kernel, so that when Sapphire Rapids comes out next year we already have the Advanced Matrix Extensions for machine learning and AI workloads supported in the Linux kernel. I have a compiler team — I’m sure you’re familiar with the LLVM compiler ecosystem — and all of our new compilers are built on LLVM. We can accelerate our GPUs, CPUs, and FPGAs. It’s a massive set of IP, and it’s IP we give away for free to enable our platforms. We’re contributing to PyTorch, TensorFlow, ONNX. We just updated Intel acceleration into TensorFlow 2.6. That had 8 million downloads just in Q3. We’re enabling the ecosystem for all the developers out there with these accelerated capabilities. We have our crypto library working with OpenSSL, accelerating crypto in software.

I think Intel has just failed to tell everyone about all the cool stuff we’re doing. We talk about our chips and our hardware and our customers. We don’t talk about all this great software. We’ve pulled it all together into my org. And I have Intel Labs, 700 researchers at Intel Labs, with all our future software and AI and ML, as well as our quantum computing group. We have this neuromorphic computing chip. We just taped out the second version of it. We open-sourced the programming environment for it, called Lava. There were some articles about Loihi 2. That’s our neuromorphic processing chip.

VentureBeat: Is some of the investment in software more around the edges of what Intel does? Would that be harder, because there’s so much capital spending going into manufacturing now, with this recommitment to making sure the core manufacturing part of Intel was taken care of? Maybe that leaves less money for software investment.

Lavender: Our view is we need to prime the ecosystem. We need to be open, be trusted. We need to practice responsible AI in all the things we do with our software. My goal is to meet the developers where they are. Historically Intel wanted to capture the developers. I want to enable them and set them free, so that they have choice.

You may be familiar with the SYCL open source programming language, data parallel C++. It’s an extension to C++ for programming GPUs and FPGAs. We have a SYCL compiler built on LLVM. We make that freely available through our oneAPI ecosystem. We have a new website coming online next week, developer.intel.com, where you’ll find all these things. We’ve just been poor about letting the world know what those investments have already paid for and delivered. Developers would be shocked to know how much of the open source technology they’re currently using has free Intel software in it. It gives them a better TCO for running their workloads in the cloud, in the datacenter, or on their laptops.

If anything is lacking, it’s efficient amplification and communication. Just telling everybody, “This is already here.” From my perspective, I just have to leverage it and go further up the stack. We’ve mostly just pushed out software that enables and tickles the hardware. But we’ve been quietly, or relatively quietly, sprinkling all of these accelerated capabilities in all the common open source environments. I mentioned PyTorch. We just don’t talk about it. What I have to change is marketing and communication. We’re going to do this at Intel.

That’s one of the major themes: engaging with the developer community and getting them access to all this cool technology so that they can choose which platforms they want to run on and get that enablement for free. They don’t have to do anything. Maybe set a flag or something. But they don’t have to do any new coding. As you well know, most developers — of 24 million developers, according to some recent data — are up the stack. If you look at the systems people, there’s maybe 1 million. There’s this big group of people in the middleware layer, the DevSecOps people. Maybe not the no-code/low-code developers at the top of the stack. But there are 4 million enterprise developers just on Red Hat. The fact that I’m pushing stuff into the new compiler ecosystem, pushing stuff into the Linux kernel, into Chrome, means all that technology will be there for all those enterprise developers. I can instantly enable 4 million developers for Sapphire Rapids or Ponte Vecchio GPU.


Above: Intel’s Ponte Vecchio is an amalgamation of graphics cores.

Image Credit: Intel

VentureBeat: If you think of things that Intel is getting back to, that maybe it used to do when it communicated through things like the Intel Developer Forum, are there things you expect will be reminders of that?

Lavender: Intel Developer Forum was one of the best tech conferences back when I was at Sun and Cisco. I think it stopped in, what, 2013? Intel Innovation is essentially a relaunch of that theme. “The geek is back,” as Pat would like to say. We were just rehearsing our dialogues for next week. I love it. We’ve grown up together in the industry. I was originally an assembly language programmer on the 8088 and the 8086. Pat and I cut our teeth on Intel as young kids. It’s just so great to be here together at this time given some of Intel’s missteps in the past. We’re in the driver’s seat, and we’re going to steer this massive company into the future.

All those investments we’ve talked about into our fabs and our foundry services business are part of the overall game plan. But if we build all these chips and then don’t have software to make it sing, what good is that? The software is what makes the hardware sing.

VentureBeat: What are some of the messages for people about how Intel has gotten over those missteps in things like the manufacturing process?

Lavender: Pat’s already been out communicating on that and what he’s doing, putting the company’s balance sheet to work to address the world’s lack of capacity to support the demand for semiconductor technologies. When we broke ground in Arizona three weeks ago, there was a lot of press around that. I think you covered Intel Accelerated, where we discussed Ponte Vecchio and how it will use our new process technology, even using TSMC tiles for the Ponte Vecchio general-purpose GPU. We’ve been adopting the new processes we’ve talked about. We’re getting the yields we need. We’re highly optimistic that the industry demand for semiconductor technologies will make IFS a strong business for us. My team, by the way, develops all the pre-silicon simulation software that IFS customers can use to simulate the functionality of their chip before they send it for tape-out.

VentureBeat: I’ve written a few stories from Synopsys and Cadence about how much AI is going into chip design these days. I imagine you’re making use of all that.

Lavender: Being CTO, I get to look across the whole company. That’s one of the advantages of being CTO. I spend a lot of time with the people in our process technology. They’re leading adopters of AI and ML technology in the manufacturing process, both in terms of optimizing yield from each wafer — wafers are expensive and you want to get the most out of every wafer — and then also for diagnostics, for defects.

Every company has silent data errors as a result of their manufacturing processes. As you get to lower and lower nanometer, into angstroms, the physics gets interesting. Physics is a statistical science. You need statistical reasoning, which is what AI and ML are really about, to help us make sure we’re reducing our defects per million, as well as getting the densities we want per wafer. You’re right. That’s the data to physics layer. You have to use machine learning and inference. We have our own models for that, about how to optimize that so we’re more competitive than our competitors.

Above: Intel CEO Pat Gelsinger breaking ground on chip production.

Image Credit: Intel

VentureBeat: If we go back in history some, Nvidia’s investments in CUDA were interesting for breaking the GPU out of its straitjacket, loosening it up for AI. That led to many changes in the industry. Does Intel have its own version of how you’d like to have something like that happen again?

Lavender: There’s at least three parts to that in the way I think about it. Everyone’s interested in roofline performance. Those are the bragging rights in the industry, whether it’s for a CPU or a GPU. We’ve released some preliminary ML performance numbers for Ponte Vecchio. I think it’s on the 23rd of this month that we’ll be submitting additional ML performance numbers for Xeon into the community for validation and publication. I don’t want to pre-announce those, but wait a couple of days.

We’re continually making progress on what we’re doing there. But it’s really about the software stack. You mentioned CUDA. CUDA has become the de facto standard for programming the GPU in the AI and ML space, not just for gaming. But there are alternatives. Some people do OpenCL. Are you familiar with SYCL, the open source effort for data parallel C++? All of our oneAPI compilers compile for CPU, for Xeon and our client CPUs, for GPU and FPGAs, which are also going into network accelerators particularly. If you want to program in C++ with the SYCL extensions, which are up for standardization in the ISO C++ standards bodies, there’s a lot of effort going into writing SYCL as an open source, industry neutral technology. We’re supporting that for our own platforms, and we’d like to see more adoption across the industry.

I’m sure you’re familiar with AMD announcing their HIP, this thing called a heterogeneous programming environment, which is essentially — think of it as a source-to-source translation of CUDA into this HIP syntax for running on their own CPU and GPU. From Intel’s perspective, we want to support the open source community. We want open standards for how to do this. We’re investing, and we’re going to support the SYCL open source community, which is the Khronos Group. We think that provides a more neutral environment. In fact, I’m told you can program SYCL on top of Nvidia GPUs.

That’s sort of step two, once you get competitive at the GPU level. Step three is, what’s the ecosystem that’s already out there? There’s lots of ISVs that are already in these spaces like health care, edge computing, automotive. Everybody wants choice. Nobody wants proprietary lock-in. We’re going to pursue the path of presenting the market and the industry and our customers with choice.

VentureBeat: How open do you want to be? That’s always a good question.

Lavender: We’ll announce this more specifically at Intel Innovation, but the oneAPI ecosystem we’ve talked about — in some sense, the oneAPI name doesn’t mean there’s one single API. It’s really just a brand name. We have more than seven different vertical toolkits for building various things with the technology. We have more than 40 components — toolkits, SDKs, and so on — that make up the oneAPI ecosystem. It’s really an ecosystem of Intel accelerated technologies, all freely available. We’re doing the oneAPI release. We’re accelerating everything from crypto to codecs to GPUs to FPGAs to CPUs — x86 CPUs, obviously, but not necessarily ours. You can use those tools on AMD if you choose.

Our view is to provide the toolkits out there, and we’ll compete at the system level together with our customers, our partners. We’ll enable all the ISVs. It’s not just the open source. We’ll enable the ISVs to use those libraries. It enables anybody doing cloud development. It enables those 4 million enterprise developers on Red Hat. Just enable everybody. We all know about how software eats the world. The more software that’s out there, in the end, cloud to edge — ubiquitous computing, we call it — that enables the advancement of society, the advancement of culture, the advancement of security.

We’re big on pushing our security features in our hardware through those software components. We’re going to get to a more secure world with less supply chain risk from hackers. Even now, machine learning models are being stolen. People spend millions of dollars to train these things, develop these models, and when they deploy them at the edge people are stealing them, because the edge is not secure. We can use all the security features like SGX and TDX in our hardware to create a security as a service capability for software. We can have secure containers. We pushed an open source project called Kata Containers that gets security from our trusted extensions and our hardware through Linux.

The more we can deliver the value of those innovations in our hardware — that most people don’t know about — through the software stack, then that value materializes. If you use Signal messenger for your communications, did you know that Signal’s servers run on Intel hardware with SGX providing a secure enclave for your security credentials, so your communications aren’t hacked or viewed by the cloud vendors? Only Signal has access to the certificates. That’s enabled by us running on Intel hardware in the cloud. The CTO of Signal will be on stage with me as we talk about this, along with the CTO of Red Hat. The CTO of Signal did his undergraduate honors thesis under me on secure anonymous communication over the internet in 2002. I’m really proud of my student and what he’s done.


Above: Greg Lavender came to Intel in June from VMware.

Image Credit: Intel

VentureBeat: How do you think about something like RISC-V?

Lavender: It shows that innovation is ever-present and always occurring. RISC-V is another set of technologies that will be adopted particularly, I would think, outside the United States, in Europe and China and elsewhere in Asia, where people want alternatives to ARM for their own reasons. It’ll be another open architecture, open ecosystem, but the challenge we have as an industry is that we have to develop the software ecosystem for RISC-V. There’s a massive software ecosystem that’s evolved over a decade or more for ARM. Either we co-opt that software ecosystem for RISC-V, or a new one emerges. There’s appetite for both, I think. There’s already investment in ARM, but at the same time there’s potential to develop something that’s not tied to the ARM environment.

There are differing opinions. I’ve heard from various people about the opportunity for RISC-V. But clearly it’s happening. I think it’s good. It gives more choice in the industry. Intel will track and see where it goes. I generally believe that it’s a positive trend in the industry.

VentureBeat: As far as what people can expect next week, when it was in person there were so many different kinds of options for deep dives. I guess you may have even more options when you’re doing it online. How would you compare this experience to what people might remember from before about Intel Developer Forum?

Lavender: It’s going to be very interactive, with Pat and myself, Sandra Rivera, Gregory Bryant for the client side, and Nick McKeown. Sandra, Nick, and I are all new in our roles, around 100-plus days. It’s going to be a lively conversation style. I forget the total number, but we have more than 100 “meet the geek” demos. We’ll have some cool stuff, everything from 5G edge robotics to deep learning, AI, ML, and obviously graphics. We’re going to show off our new Alder Lake processor. Lots of stuff about various open source toolkits we’ve launched. You may not have heard of IPDK. It’s an open source project we launched. A lot of people are jumping on the bandwagon to offload workloads that traditionally run on the cores to the smart NIC. We have some partners that will be showing up to talk about our technology and how they’re using it.

It’s only a two-day event, but there’s a lot of material packed into those two days. It’s a video format. You can browse around and pick and choose what you want. I think we’re all fatigued by these virtual conferences. We’re trying to make it not just a bunch of talking heads, but more of an interactive dialogue about things we’re doing, about our customers and how they’re taking advantage of it, and then quickly transitioning to live or recorded demos to show that it’s real. It’s not just marketing. It’s real.

VentureBeat: Does this sort of thing make you wish the metaverse was here, that we could make it happen faster?

Lavender: There’s this whole sociological, anthropological conversation to have about the transition we’ve all been through for the last two years. For me, I worked in banking, so I’ve learned to think like a global economist. You can’t help but do that when you’re CTO of a global financial company. I look at these things at more of the macroeconomic level in terms of the likely societal changes. Clearly the shortages in the supply chain and the chokes in the supply chain have shown the insatiable demand for technology generally. Everything we’re doing now is technology-enabled. Can you imagine if we didn’t have Zoom, Teams, whatever? What would that have been like? Obviously this is something in the human experience. We’ve all experienced that.

Above: Intel has 6,000 software engineers.

Image Credit: Intel

But without a doubt, the demand for semiconductors, the demand for software will outstrip the talent, the global talent we have to produce it. We have to get economies of scale. This is where Intel has an advantage. We have those economies of scale more than anyone. We can satisfy more of that demand, even if we have to build factories. We have to accelerate all of that with software. This is why there’s a software-first strategy here. If we’re talking five years from now, it could be a very different story, because the company is putting its mojo back into software, particularly open source software. We’re going to continue to deliver a broad portfolio of technologies to enable that global demand to be met in multiple verticals. We all know software is the liquid. It’s the lubricant that enables that technology to add social and economic value.

VentureBeat: Does it look like 2023 is when the supply chain gets back to its healthier self?

Lavender: I read the same press you read. It seems like it’s a two-year cycle to get there. I’ve read stories about people building their own containers to put on a ship and collect the parts to bring back. Walking supplies through customs in various countries to get them through the process and the bureaucracy. Right now it seems like a lot of unusual things are happening. I’ve even heard about people receiving SOC packages, and when they go to test them there are actually no guts inside the SOC. That hasn’t happened to us, but these are the stories I’ve read in the press.

VentureBeat: I would hope that the U.S. government comes around and sees the need to invest in bringing a lot of this back onshore.

Lavender: The CHIPS Act — I’m sure you’re familiar with that. It’s passed the Senate. It hasn’t yet passed the House. I think it’s tied up in the politics of the current spending bill. The Biden administration is trying to put it through. Obviously we’re supporters of that. It’s as good for the industry as it is for Intel. But your guess is as good as mine about geopolitics. It’s not an area that I have any expertise in.

VentureBeat: As far as some futuristic things, I wonder if you’ve thought about some things like Web 3 and the decentralized web, whether that may come to pass or whether it needs certain investments across the industry to happen.

Lavender: There’s a lot of talk. We all think that the datacenter of the future — you may have heard us talk about going from exascale to zettascale. When you get to those scales, to zettascale, it becomes a communications issue. We’ve invested and pioneered in silicon photonics. We can get latencies over distances to a millisecond. That’s quite a distance you can travel at the speed of light.

First off, the innovations in core networking and the edge — it’s not just 5G. I have a new Nighthawk modem from Netgear. I get 400 megabits per second download. It cost me 800 bucks for that device, but if you’re on a good 5G network, you see the value of it. We’re going to be close to a gigabit before too much longer. 6G is going to give you much more antenna bandwidth as well. The bandwidth has to go there before all the other compute density distributes.

I think what you’re talking about is workloads moving not necessarily to the cloud, but away from the cloud and more to the edge. That’s certainly a trend. We see that in our own business and our own growth, in demand for FPGAs and our 5G technologies. Compute becomes ubiquitous. That’s what we’ve said. Network connectivity becomes pervasive. And it’s software-controlled. There has to be software to manage that level of distribution, that level of autonomy, that level of disaggregation.

Humans aren’t good at building distributed control planes. Just look at what goes on today. The security architecture that has to overlay all of that — you’ve created a massive surface area for attack vectors. Again, here at Intel we think about these things. We have the capacity and the manufacturing capability to start building prototype technology. I have Intel Labs. That’s 700 researchers. Those are areas we’re discussing as we look at our funding for the next fiscal year, to start exploring these distributed architectures. But most important, back to the software story — I can build the hardware. We can do that. It’s about how you actually manage that at zettascale.

Above: Intel is taking a systems approach to software.

Image Credit: Intel

VentureBeat: You must be happy that Windows 11 has that hardware security feature built in. I think some of these game companies are starting to realize that ring zero access for things like anti-cheat in multiplayer games is important.

Lavender: Windows 11 requires TPM. I have an old Intel NUC that I use for programming. I’ve tried to upgrade to Windows 11 and it told me I needed to buy a new one because I didn’t have the Trusted Platform Module. I asked my colleagues here when the next NUC is coming out. I don’t want to get the currently shipping one. I want one with the new chips. So I’m in line for a beta box.

I just got put onto the Open Source Security Foundation, along with the CTOs of VMware and Red Hat and HPE and Dell. We’re really going to tackle this problem for the industry in that forum. From my platform at Intel as the CTO, I want to engage with all my ecosystem partners so that we solve this problem as an industry. It’s too big a problem to solve one-off.
