
Microsoft’s virtual Xbox museum is a very detailed stroll down memory lane

If you haven’t heard by now, the Xbox brand turned 20 this year. With anniversary livestreams, controllers, and even a surprise Halo Infinite multiplayer release, we’re not sure how you could have missed the news, but that’s neither here nor there. The anniversary train hasn’t stopped rolling yet, as Microsoft has launched a new virtual museum that takes us through the history of Xbox.

From 1990s concept to present day

At first blush, a virtual museum celebrating 20 years of Xbox might sound a bit self-indulgent, but it’s well worth visiting for any Xbox fans out there. The browser-based museum starts you right at the beginning of the Xbox’s history, when Microsoft’s DirectX team began developing the Xbox as a competitor to the upcoming PlayStation 2.

From there, we’re taken through many of the significant events in Xbox history, from the development and reveal of the first console to the subsequent launches of the other consoles that comprise the Xbox family. The museum covers more than console releases, too: big events like the launch of Kinect and Microsoft’s acquisition of Mojang are included, and so are some of the stumbles in Xbox history, such as the Xbox 360’s “Red Ring of Death” problem.

Visitors to the museum get to use avatars to run through a digital track that takes them through the history of each console. There’s also a separate museum for Xbox’s biggest franchise, Halo, which shows all of the major happenings in that franchise alongside Xbox history. You might want to set aside some time over the upcoming holiday weekend to explore the museum, as seeing every exhibit and watching every video will take quite a while.

A quick note: we’ve tried visiting the Xbox museum in both Chrome and Edge, and for us, at least, the museum runs much more smoothly in Edge. Perhaps that’s not a coincidence, but, in any case, if you have Edge installed on your machine, you might want to start by using that browser.

The biggest exhibit is you

While the trip down Xbox memory lane is cool, the virtual museum also recaps the Xbox histories of the players visiting. Logging into your Microsoft account will show you statistics on your years with Xbox, dating all the way back to the original Xbox (assuming you actually connected a LAN cable to it and signed into the early iteration of Xbox Live).

For instance, even though I had an original Xbox back in the day, I never connected it to the internet, so as far as Microsoft is concerned, my first Xbox console was the Xbox 360. The first Xbox game Microsoft has a record of me playing is Halo 3, and my first sign-on to Xbox Live was on October 2nd, 2007.

These statistics go pretty deep, showing you the first time you logged in on each Xbox console throughout the years, the first game you played on each of those consoles, and even the first time you played your most-played Xbox game of all time (for me, that date is September 25th, 2010 and the game in question is Halo: Reach).

The virtual Xbox museum is a fascinating trip, and it’s something that all Xbox users should check out, if for no other reason than to see their own history with the consoles.


Astera Labs announces memory acceleration to clear datacenter AI/ML bottlenecks

Astera Labs today announced key advancements to clear up performance bottlenecks in enterprise datacenters caused by the massive data needs of AI and ML applications.

Timed to coincide with Supercomputing 2021 (SC21), a high-performance computing conference taking place this week, the company is launching what it claims is the industry’s first memory accelerator platform based on the Compute Express Link (CXL) standard for interconnecting general-purpose CPUs and various other datacenter devices.

The news is significant because clearing bottlenecks in datacenters has become a holy grail for the major vendors of processors. Their customers are struggling with performance, bandwidth, and latency issues as they piece together different types of processors like CPUs, GPUs, and AI accelerators that are required to drive powerful applications like AI.

By combining its existing Aries product (PCIe retimers) with the newly announced Taurus (smart cables) and Leo SoC (CXL memory accelerators), Astera Labs says it can become the leading cloud connectivity provider and more than double its revenue annually. The company sees a $1 billion pipeline opportunity and estimates a total addressable market of $8 billion by 2025, fueled by the growth of AI.

The goal is to create a faster connectivity backbone that provides low-latency interconnects, shares resources, and handles tricky technologies like caching efficiently. Astera Labs also says its fully cloud-based approach provides significant advantages in design productivity and quality assurance.

Feeding data to memory accelerators

One of the persistent challenges in computing is ensuring that CPUs and other accelerators can be fed data fast enough. This has become a major issue given the explosive growth of AI, where model sizes have doubled as often as every three and a half months. In recent years, DRAM scaling has not kept up with Moore’s law, which means memory is becoming a more limiting and costlier factor than compute. The CXL protocol, based on standard PCIe infrastructure, is an alternative to the standard DIMM slot for DRAM. It can also be used to attach accelerators to the CPU.
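To put that doubling rate in perspective, here’s the back-of-the-envelope arithmetic (our own illustration, not a figure from Astera Labs):

```python
# If model size doubles every 3.5 months, annual growth compounds quickly.
DOUBLING_MONTHS = 3.5
growth_per_year = 2 ** (12 / DOUBLING_MONTHS)
print(f"{growth_per_year:.1f}x per year")  # ~10.8x annually
# DRAM density, by contrast, has recently improved at well under 2x per
# year, which is why memory, not compute, becomes the limiting factor.
```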

Intel proposed the CXL standard in 2019, and its industry adoption is targeted to coincide with PCIe 5.0 in 2022. Compared to PCIe 5.0, CXL adds multiple features such as cache coherency across CPU and accelerators and also has a much lower latency. In the future, CXL 2.0 will add rack-level memory pooling, which will make disaggregated datacenters possible.

Astera Labs already has some products that are used by cloud service providers, such as PCIe and CXL retimers, but is aiming to expand this portfolio with these new announcements.

Memory accelerator for CXL 2.0

Leo, which Astera calls the industry’s first memory accelerator platform for CXL 2.0, is designed to use CXL 2.0 to pool and share resources (memory and storage) across the multiple chips in a system, including the CPU, GPU, FPGA, and SmartNIC, and to make disaggregated servers possible. Leo also offers built-in fleet management and diagnostic capabilities for large-scale server deployments, such as in the cloud or in enterprises.

“CXL is a game-changer for hyperscale datacenters, enabling memory expansion and pooling capabilities to support a new era of data-centric and composable compute infrastructure,” Astera Labs CEO Jitendra Mohan said. “We have developed the Leo SoC [system on a chip] platform in lockstep with leading processor vendors, system OEMs, and strategic cloud customers to unleash the next generation of memory interconnect solutions.”

CXL consists of three protocols: CXL.io, CXL.cache, and CXL.memory. However, only the implementation of CXL.io is mandatory. For the artificial intelligence use case of a cache-coherent interconnect between memory, the CPU, and accelerators such as GPUs and NPUs (neural processing units), the CXL.memory protocol is relevant. Although the latency of CXL is higher than a standard DIMM slot, it is similar to current (proprietary) inter-CPU protocols such as Intel’s Ultra Path Interconnect (UPI). Because one of the goals of CXL 2.0 is to enable resource pooling at the rack-scale, the latency will be similar to today’s solutions for internode interconnects. CXL.memory further supports both conventional DRAM and persistent memory, in particular Intel’s Optane.

The Leo SoC memory accelerator platform positions Astera to play a critical role in supporting the industry’s adoption of CXL-based solutions for AI and ML. Because CXL is based on PCIe 5.0, Leo supports a transfer rate of 32 GT/s per lane, and its maximum memory capacity is 2TB.
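For a rough sense of what 32 GT/s per lane means in practice, here’s the usual effective-bandwidth math (our own back-of-the-envelope sketch; the x16 lane count is an assumption, not an Astera Labs spec):

```python
# PCIe 5.0 signals at 32 GT/s per lane with 128b/130b line encoding.
GT_PER_S = 32e9          # raw transfers per second, per lane
ENCODING = 128 / 130     # line-code efficiency
LANES = 16               # assuming a typical x16 link

bytes_per_lane = GT_PER_S * ENCODING / 8  # ~3.94 GB/s per lane, per direction
print(f"x{LANES} link: {bytes_per_lane * LANES / 1e9:.1f} GB/s each way")
# Roughly 63 GB/s per direction, before protocol overhead.
```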

“Astera Labs’ Leo CXL Memory Accelerator Platform is an important enabler for the Intel ecosystem to implement a shared memory space between hosts and attached devices,” Jim Pappas, director of technology initiatives at Intel, said.

“Solutions like Astera Labs’ Leo Memory Accelerator Platform are key to enable tighter coupling and coherency between processors and accelerators, specifically for memory expansion and pooling capabilities,” Michael Hall, director of customer compatibility at AMD, agreed.

Inside CXL

Digging a bit deeper into CXL: the Intel-proposed standard was actually the last cache-coherent interconnect standard to be announced. Arm, for example, was already promoting its CCIX standard, and various other vendors were working on a similar solution in the Gen-Z Consortium. However, because Intel, still the dominant vendor in the datacenter, was absent from these initiatives, they gained little traction. So once Intel proposed CXL as an open interconnect standard based on the PCIe 5.0 infrastructure, the industry quickly moved to back the CXL initiative, as Intel promised support in its upcoming Sapphire Rapids Xeon Scalable processors.

Within six months of the CXL announcement, Arm announced that it, too, would move away from its own CCIX in favor of CXL. Earlier this month, the Gen-Z Consortium announced that it had signed a letter of intent (following a previous memorandum of understanding) to transfer the Gen-Z specifications and assets to the CXL Consortium, making CXL the “sole industry-standard” going forward.

Other vendors have already announced support. In 2021, Samsung and Micron each announced that they would bring DRAM based on the CXL interconnect to the market. In November, AMD announced that it would start to support CXL 1.1 in 2022 with its Epyc Genoa processors.

Outside of CXL

Astera also announced Taurus SCM, a family of smart cable modules (SCMs) for Ethernet. These “smart cables” maintain signal integrity as bandwidth doubles across 200G, 400G, and 800G Ethernet (which is starting to replace 100GbE) over copper cables 3m and longer, and they support latencies up to six times lower than the specification requires. Other smart features include security, cable degradation monitoring, and self-test. The cables support up to 100G-per-lane serializer-deserializer (SerDes) signaling.

Astera Labs is an Intel Capital portfolio company. The startup is partnering with chip providers such as AMD, Arm, Nvidia, and Intel’s Habana Labs, which have also backed the CXL standard. In September, the company announced a $50 million Series C investment at a $950 million valuation.


Memory Leak Bug Is Killing MacOS Monterey Performance


Apple’s newest desktop operating system, MacOS Monterey, brings a handful of useful new features, but an assortment of issues as well. Some people are reporting memory leaks after upgrading to MacOS Monterey, and some reports even include warnings that the entire system has run out of memory.

While new operating system rollouts tend to have a few bugs, this one seems particularly bothersome. A memory leak occurs when a process never releases memory that was allocated to it once it has finished with it, so the process keeps claiming more RAM over time, sometimes until there’s none left.
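For illustration, the classic leak pattern looks something like this in code (a minimal Python sketch of the general pattern, not Apple’s actual bug):

```python
# Classic leak: a process keeps references to data it no longer needs,
# so that memory can never be reclaimed.
_retained = []  # module-level list that is never cleared

def handle_request(payload: bytes) -> int:
    # Bug: every payload is appended "for later" and never removed,
    # so resident memory grows with every call.
    _retained.append(payload)
    return len(payload)

if __name__ == "__main__":
    for _ in range(1_000):
        handle_request(b"x" * 1_000_000)  # ~1GB still retained afterward
```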

There have been a number of complaints across multiple forums, including Apple’s own support forums, Reddit, and Twitter. YouTuber Gregory McFadden tweeted a picture in which Control Center was using a whopping 26GB of RAM. By comparison, Final Cut Pro, a full-fledged professional video editing program, was using only 6GB. Control Center normally uses only a couple of megabytes of RAM.

The issue doesn’t seem to be limited to a particular Mac model, either. Users with M1, M1 Pro/Max, and Intel versions have all reported memory leaks. One Firefox user with an Intel Mac reported Firefox using almost 80GB of RAM. While some users like Gregory McFadden have upwards of 64GB of RAM installed, many others will have much less and will feel the pinch of a memory leak more acutely.

So glad I got 64GB of memory on my new Mac so I can use 26GB of it for control center… Wait… what. pic.twitter.com/inCOPaii1o

— Gregory McFadden (@GregoryMcFadden) October 28, 2021
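If you want to check which processes are hogging memory on your own machine, here’s a quick sketch (this assumes the third-party psutil package, installed with pip install psutil; it’s our illustration, not the tool the users above were using):

```python
import psutil  # cross-platform process inspection

def top_memory_processes(n: int = 5) -> None:
    """Print the n processes with the largest resident memory."""
    procs = []
    for p in psutil.process_iter(["name", "memory_info"]):
        mem = p.info.get("memory_info")
        if mem is None:  # processes we aren't allowed to inspect
            continue
        procs.append((mem.rss, p.info.get("name") or "?"))
    for rss, name in sorted(procs, reverse=True)[:n]:
        print(f"{name}: {rss / 2**30:.2f} GiB resident")

if __name__ == "__main__":
    top_memory_processes()
```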

This isn’t the only major issue with MacOS Monterey. Those with older Macs who install the new operating system are at risk of bricking their computers; many users have reported Macs that simply wouldn’t turn on at all after upgrading. There does seem to be a temporary fix, but it requires access to another Mac.

Lest the Windows faithful get cocky, Windows 11 users have also reported memory issues. Windows Insiders found that File Explorer consumes memory even after being closed, and we were able to reproduce the leak on both Windows 11 and Windows 10. Fortunately, that issue seems limited to File Explorer rather than striking random programs the way MacOS Monterey’s does.

Regardless, the memory leak on MacOS Monterey could just be a teething problem for a new operating system. Apple will hopefully issue a patch to fix the leak, although memory leaks do seem to be a recurring theme on MacOS. At any rate, it may be worth holding off on upgrading your Mac for now.


Switch OLED CPU and memory detailed, but don’t get too excited

It’s been a pretty big day for Nintendo, but also one with a fair amount of confusion. Nintendo announced the Nintendo Switch OLED earlier today, a new version of its Switch console that comes with a few upgrades. Some Switch fans have long expected Nintendo to reveal the Switch Pro, which has been the subject of a number of rumors over the past several years, but unfortunately, it seems that the Switch OLED is not the hybrid console those folks were looking for.

The Switch OLED does come with a few of the upgrades that were rumored for the Switch Pro. As the name reveals, the new console has an OLED display, just as the Switch Pro was rumored to. That display is also bigger than the one on a standard Switch, clocking in at 7 inches instead of 6.2 inches, checking another box for the rumor mill.

One key upgrade many rumors claimed was an improved CPU. However, Nintendo’s spec sheet for the Switch OLED suggested that the processor won’t be changing, listing only an “NVIDIA Custom Tegra processor” – the same thing listed for the standard Switch’s CPU.

Now Nintendo has confirmed that the CPU won’t be changing, nor will the amount or type of RAM. “Nintendo Switch (OLED model) does not have a new CPU, or more RAM, from previous Nintendo Switch models,” a Nintendo representative told The Verge. So, if there was any question before, it’s now safe to say that the Switch OLED is not exactly the Switch Pro of legend.

Essentially, the Switch OLED is for those who want a better-quality, slightly larger display than what’s offered on the standard Switch. There are other upgrades, such as 64GB of internal storage and better audio quality, but those alone probably aren’t worth upgrading from a standard Switch for if you aren’t interested in the larger display. The Nintendo Switch OLED is out on October 8th for $349.99.


Intel Optane Memory H10 SSD Review: How it could speed up your next laptop

Intel’s Optane Memory H10 SSD is one of those enigmas of PC hardware that can drive reviewers crazy. It is—simply put—a storage technology that is more responsive in some cases, but slower in others.

It’s also a technology you can’t choose for yourself. Currently, Optane Memory H10 is being sold only to PC OEMs, who will integrate it into space-limited laptops and eventually full-on gaming laptops.

Because it’s Intel technology, it’s not going to work with platforms it’s not approved for (read: AMD). As you start seeing it in new laptops, this review will help you decide whether it’s a feature worth seeking out.


Intel’s Optane Memory H10 with Solid State Storage is essentially two drives in one.

What is Intel’s Optane Memory H10?

Intel officially names this device “Optane memory H10 with solid state storage.” It’s much easier to think of it as a hybrid drive, or two drives in one. On one half of the M.2 stick, Intel has placed 32GB of Optane memory; the rest houses 512GB of QLC-based NAND.


Intel’s Optane H10 with SSD is really another iteration of Optane.

Both are independent drives, each with dedicated x2 PCIe Gen 3 bandwidth. In fact, if you disable Optane in the Intel Rapid Storage Technology driver, the two show up separately in Windows 10’s Device Manager. Used as intended, though, they appear as a single drive.


Turning off Optane acceleration allows Windows Device Manager to see the two different drives.

Why Optane Memory H10?

The idea behind Optane Memory H10 is to use Optane Memory technology to accelerate performance of a slower drive by storing frequently-used files on the Optane memory. The concept is already in place for traditional hard drives, but it’s new for an SSD.
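Conceptually, this is the same trick as any small fast tier sitting in front of a big slow one. A toy sketch of the idea (a generic LRU tiering simulation of our own, not Intel’s actual caching algorithm):

```python
from collections import OrderedDict

FAST_TIER_BLOCKS = 4  # stand-in for the small, fast Optane portion

class TieredStore:
    """Serve hot blocks from a fast tier; fall back to the slow tier on a miss."""
    def __init__(self):
        self.fast = OrderedDict()  # block_id -> data, kept in LRU order

    def read(self, block_id, slow_read):
        if block_id in self.fast:            # hit: serve from the fast tier
            self.fast.move_to_end(block_id)
            return self.fast[block_id]
        data = slow_read(block_id)           # miss: read from the QLC NAND
        self.fast[block_id] = data           # promote the block
        if len(self.fast) > FAST_TIER_BLOCKS:
            self.fast.popitem(last=False)    # evict the least recently used
        return data

store = TieredStore()
store.read(7, slow_read=lambda b: f"block-{b}")  # first read: slow path
store.read(7, slow_read=lambda b: f"block-{b}")  # second read: fast hit
```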

What’s not clear is whether it makes sense. When we first reviewed Optane Memory two years ago, we found it to be pretty impressive at accelerating dog-slow hard drives. It also seemed promising against slower TLC (triple-level cell) SSDs.

A lot has changed with SSDs, though. TLC drives have gotten a lot faster. The other big change is that denser QLC (quad-level cell) drives have stormed the PC market. QLC packs more bits into each cell, which generally means a sacrifice in performance. With the Optane Memory H10, Intel is hoping to boost the performance of QLC-NAND SSDs.


Google Assistant Memory could be its most exciting upgrade yet

Some productivity gurus emphasize that the brain is for thinking, not for holding “stuff”. While some scientists will definitely contest that claim, it’s hard to deny that the average human finds it difficult to juggle many thoughts and ideas at once. That’s what notebooks, both paper and digital, have been used for over the centuries, and Google is apparently looking into turning its AI-powered Assistant into the Memory for your digital brain.

Google Assistant may have started out as something like a glorified voice-controlled Google Search, but it has always been envisioned as, well, your digital assistant. Just like human assistants, it is meant to take care of menial stuff, like keeping tabs on your to-dos, filing tickets and receipts, and reminding you of important dates. Google Assistant already does that, and soon it will also keep your spur-of-the-moment ideas and references.

9to5Google’s APK teardown of the Google app revealed what is being called Assistant’s “Memory” feature. In a nutshell, this would allow users to dump almost anything they encounter both in the real world and on the Internet. That includes photos of objects, notes, and places as well as screenshots, links, and more. Even Google Assistant’s current Reminders will soon be located in this Memory.

Storing things in Assistant’s Memory is a simple matter of a voice command or a home screen shortcut. Retrieving them is just as easy: a feed-like tab in the app shows the memories as cards in reverse chronological order, with today’s latest on top. The app will also offer shortcuts to certain categories like Important, Read Later, and so on.

At the moment, Google Assistant Memory is being tested internally. As great as it sounds, there’s also a chance it never leaves Google’s doors in the end. That would be a shame, though, as this could be one of the most important features that the Assistant is getting in terms of actually helping put order in our chaotic modern lives.


Chrome 89 promises less memory greed and a cooler, quieter Mac

Google claims to have tamed Chrome’s memory-hungry ways, with the Chrome M89 release of the browser reportedly making big cuts when it comes to system demands. Long a topic of frustration, Chrome’s hunger for RAM – as well as other Windows, Mac, and Android resources – has also long been a focus for Google’s Chromium engine team, and this time they say they’ve made significant improvements.

In Windows, for example, Chrome 89 is showing up to 22-percent memory savings in the browser process. That’s not all, either: the Chromium team claims it can also save 8-percent in the renderer, and 3-percent in the GPU.

Browser responsiveness, meanwhile, has improved by up to 9-percent. Google gathers those results using real-world user data, anonymously aggregated across Chrome clients from those who allow the company to get performance reports.

As for just how it did that, according to Product Manager Mark Chang, it’s down to a new memory allocator. Dubbed PartitionAlloc, it’s now being used in the 64-bit Windows version of Chrome – and the Android version of the browser – with Chang saying that the system has been optimized for low allocation latency, space efficiency, and security. Chrome should also be smarter about discarding memory, and can now reclaim up to 100MiB per tab by releasing memory the foreground tab isn’t actively using.

For instance, if you load a webpage with big images, Chrome 89 can discard those pictures as you scroll past them. It’s enough, Chang says, to save more than 20-percent of memory on some popular sites.

As for macOS, Chrome 89 is catching up with the versions for other platforms. By shrinking its memory footprint in background tabs, it’s apparently seeing up to 8-percent memory savings, or the equivalent of more than 1GiB. Macs will also benefit from improved tab throttling, leading to up to a 65-percent improvement in the Apple Energy Impact score for background tabs. The result, Chang says, should be a cooler-running Mac and less need for active cooling with the fans.

Unsurprisingly, given Google’s focus on Android recently, Chrome on the mobile platform is getting some extra attention. Thanks to new Play and Android capabilities, Chang says, the Chrome team could repackage the browser. He claims “we’re seeing fewer crashes due to resource exhaustion, a 5% improvement in memory usage, 7.5% faster startup times, and up to 2% faster page loads.”

If you’ve got an Android Q device with 8GB of memory or more, you’ll be able to take advantage of Chrome rebuilt as a 64-bit binary. That should be more stable, Chang says, along with up to 8.5-percent faster at page loading, and 28-percent smoother for input latency and scrolling.

Those with less potent devices, meanwhile, could see an improvement, with Chrome 89 taking advantage of Android App Bundles to optimize APK size. That means the Play store can generate tailored downloads for the specific device, so that only the necessary code is used.

Chrome on Android should start faster, too. Freeze-Dried Tabs basically save a lightweight version of the tabs you have open, closer to a screenshot in size but still with working links and scroll/zoom support. By showing those initially, while the full page loads in the background, Chrome on Android can be ready 13-percent faster at startup.
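To picture what such a snapshot might contain, here’s a hypothetical sketch (the structure and every name in it are our own guesses at the concept, not Chromium’s implementation):

```python
from dataclasses import dataclass, field

@dataclass
class LinkRegion:
    # A tappable rectangle mapped to its destination URL.
    x: int
    y: int
    w: int
    h: int
    url: str

@dataclass
class FreezeDriedTab:
    # Just enough state to render instantly while the real page loads.
    url: str
    screenshot_png: bytes  # cheap raster of the last view
    links: list[LinkRegion] = field(default_factory=list)
    scroll_y: int = 0

    def hit_test(self, x: int, y: int):
        """Return the URL under a tap, so links work before the page loads."""
        for r in self.links:
            if r.x <= x < r.x + r.w and r.y <= y < r.y + r.h:
                return r.url
        return None
```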


Faster Memory: A.I. Helps Samsung Double HBM Speeds

High-bandwidth memory, or HBM, is already fast. But Samsung wants to make it even faster. The South Korea-based technology giant has announced its HBM-PIM architecture, which promises to double the speed of high-bandwidth memory by leaning on artificial intelligence.

PIM, which stands for processor-in-memory, leverages the capabilities of artificial intelligence to speed up memory, and Samsung hopes that its HBM-PIM tech will be used in applications such as data centers and high-performance computing (HPC) machines in the future.

“Our groundbreaking HBM-PIM is the industry’s first programmable PIM solution tailored for diverse A.I.-driven workloads such as HPC, training, and inference,” Kwangil Park, Samsung Electronics senior vice president of memory product planning, said in a statement. “We plan to build upon this breakthrough by further collaborating with A.I. solution providers for even more advanced PIM-powered applications.”

A potential client for Samsung’s HBM-PIM is the Argonne National Laboratory, which hopes to use the technology to solve “problems of interest.” The lab noted that HBM-PIM addresses the memory bandwidth and performance challenges for HPC and AI computing by delivering impressive performance and power gains.

According to Samsung, HBM-PIM works by placing a DRAM-optimized engine inside each memory bank of a storage subunit, enabling parallel processing and minimizing data movement.

“When applied to Samsung’s existing HBM2 Aquabolt solution, the new architecture is able to deliver over twice the system performance while reducing energy consumption by more than 70%,” the company stated. “The HBM-PIM also does not require any hardware or software changes, allowing faster integration into existing systems.”

This is different from existing solutions, which are all based on the von Neumann architecture, in which a separate processor and separate memory units carry out all data processing tasks sequentially. That design requires data to travel back and forth, often resulting in a bottleneck when handling large data volumes.
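A toy model makes the bottleneck concrete (illustrative numbers of our own, not Samsung’s):

```python
# Time to move a working set over the memory bus vs. time to compute on it.
DATA_BYTES   = 64 * 2**30   # 64 GiB working set
BUS_BPS      = 25 * 2**30   # ~25 GiB/s effective memory bandwidth (assumed)
FLOPS        = 10e12        # 10 TFLOP/s of compute (assumed)
OPS_PER_BYTE = 0.25         # low arithmetic intensity, common in inference

move_time    = DATA_BYTES / BUS_BPS                # shipping data to the CPU
compute_time = DATA_BYTES * OPS_PER_BYTE / FLOPS   # doing the actual math

print(f"movement: {move_time:.2f} s, compute: {compute_time:.4f} s")
# Movement dominates by roughly three orders of magnitude here; doing the
# work inside the memory bank, as PIM does, removes most of that transfer.
```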

By removing the bottleneck, Samsung’s HBM-PIM could be a useful tool in a data scientist’s arsenal. Samsung says the HBM-PIM is now being tested inside A.I. accelerators by leading A.I. solution partners, with validation expected to be completed in the first half of 2021.
