Windows 11 Gains Back A Highly Requested Taskbar Feature

Microsoft has confirmed that the weather widget originally introduced in Windows 10 will be integrated into Windows 11. A new voice access feature has also been added to the operating system.

The latest Insider preview build for Windows 11, Build 22518, displays live weather content on the left side of the taskbar. Users will also be able to open the widgets board by hovering over that entry point.

Another addition to Windows 11’s latest preview build is voice access, which allows users to control several aspects of their PC and author text through voice commands.

The new feature lets you open and switch between apps, browse the web, and control the mouse and keyboard. For example, you can click an item like a button or a link by saying “click start” or “click cancel.” Similarly, you’ll be able to open an application by saying “open Edge” or “open Word.”

Voice access can also be used to search and edit text and to interact with overlays. Microsoft provided a full list of voice access commands, but pointed out that the feature only supports the English (U.S.) option in the display language settings.

Microsoft has also made it easier for new users to install the Windows Subsystem for Linux through the Microsoft Store.

Microsoft is also introducing the Spotlight collection to Windows 11, which will “keep your desktop fresh and inspiring.” New desktop pictures from around the world will be offered every day, accompanied by various facts pertaining to the picture itself.

Besides Insider preview build 22518, Microsoft recently released a redesigned Notepad. An updated user interface brings changes like rounded corners, but the most exciting inclusion is a dark mode component. The new-look Notepad also addresses a “top community feature request” by adding support for multilevel undo.

There’s currently no timeline for when all these changes will become available for all Windows 11 users, but expect a rollout sometime during 2022. As for the latest preview build that’s been released in the Windows 11 Dev Channel, Microsoft stated that it won’t be offered to ARM64 PCs due to an issue that it’s currently working to fix.

Regarding Microsoft’s plans for Windows 11 in 2022, one major area of focus is performance, with the tech giant highlighting that improving the responsiveness of the new operating system will be a priority.

Nvidia benchmark tests show impressive gains in training AI models

Nvidia announced that systems based on its graphics processing units (GPUs) are delivering 3 to 5 times better performance in training AI models than they did a year ago, according to the latest MLPerf benchmarks published yesterday.

The MLPerf benchmark is maintained by the MLCommons Association, a consortium backed by Alibaba, Facebook AI, Google, Intel, Nvidia, and others that acts as an independent steward.

The latest set of benchmarks spans eight different workloads covering a range of use cases for AI model training, including speech recognition, natural language processing, object detection, and reinforcement learning. Nvidia claims its OEM partners were the only systems vendors to run all the workloads defined by the MLPerf benchmark, across a total of 4,096 GPUs. Dell, Fujitsu, Gigabyte Technology, Inspur, Lenovo, Nettrix, and Supermicro all provided on-premises systems certified by Nvidia that were used to run the benchmark.

Nvidia claims that overall it improved more than any of its rivals, delivering as much as 2.1 times more performance than the last time the MLPerf benchmarks were run. Those benchmarks provide a reliable point of comparison that data scientists and IT organizations can use to make an apples-to-apples comparison between systems, said Paresh Kharya, senior director for product management for Nvidia. “MLPerf is an industry-standard benchmark,” he said.
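To make the benchmarking idea concrete, the Python sketch below is purely illustrative and is not MLPerf itself: it times a fixed, repeatable workload and reports throughput. MLPerf applies the same principle at far larger scale, training standardized models on standardized datasets to a target quality, which is what makes cross-system comparisons meaningful.

    import time
    import numpy as np

    def benchmark_steps(size: int = 1024, iterations: int = 50) -> float:
        # A fixed matrix multiply stands in for one "training step";
        # real benchmarks like MLPerf train full models to a target accuracy.
        a = np.random.rand(size, size).astype(np.float32)
        b = np.random.rand(size, size).astype(np.float32)
        start = time.perf_counter()
        for _ in range(iterations):
            _ = a @ b
        elapsed = time.perf_counter() - start
        return iterations / elapsed  # higher steps per second = faster system

    print(f"{benchmark_steps():.1f} steps/sec")

Because the workload is identical from run to run, the only variable is the system executing it, which is the basis of an apples-to-apples comparison.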

Trying to quantify the unknown

It’s not clear to what degree IT organizations rely on consortium benchmarks to decide what class of system to acquire. Each workload deployed by an IT team is fairly unique, so benchmarks are no guarantee of actual performance. Arguably, the most compelling thing about the latest results is that they show systems acquired last year or even earlier continue to improve in overall performance as software updates are made. That increased level of performance could reduce the pace at which Nvidia-based systems need to be replaced.

Of course, the number of organizations investing in on-premises IT platforms to run AI workloads is unknown. Some certainly prefer to train AI models in on-premises IT environments for a variety of security, compliance, and cloud networking reasons. However, the cost of acquiring a GPU-based server tends to make consuming GPUs on demand via a cloud service the more attractive option for training AI models, at least until an organization hits a certain threshold in the number of models being trained simultaneously.

Alternatively, providers of on-premises platforms are increasingly offering pricing plans that enable organizations to consume on-premises IT infrastructure using the same model as a cloud service provider.

Other classes of processors might end up being employed to train an AI model. Right now, however, GPUs — thanks to their inherent parallelization capabilities — have proven themselves to be the most efficient option.

Regardless of the platform employed, the number of AI models being trained continues to steadily increase. There is no shortage of use cases involving applications that could be augmented using AI. The challenge in many organizations now is prioritizing AI projects given the cost of GPU-based platforms. Of course, as consumption of GPUs increases, the cost of manufacturing them will eventually decline.

As organizations create their road maps for AI, they should be able to safely assume that both the amount of time required and the total cost of training an AI model will continue to decline in the years ahead — even allowing for the occasional processor shortage brought on by unpredictable “black swan” events such as the COVID-19 pandemic.

Google Workspace gains client-side encryption amid slew of new security features

Google on Monday announced new security features for Google Workspace and Google Drive to better ensure data security and privacy protection in hybrid work environments.

The new security tools include client-side encryption for Workspace and several enhanced data protection features in the platform’s Drive service, including more granular control over trust rules, enhanced phishing and malware protection, and the addition of integrated data loss prevention (DLP) in Drive labels.

Google director of product management Karthik Lakshminarayanan told VentureBeat that the new security features in the search giant’s enterprise collaboration platform were the result of several factors, including Google’s “security first” philosophy, the rapid increase in remote work environments due to the pandemic, and the company’s experience with its BeyondCorp zero trust security model.

BeyondCorp Enterprise, released earlier this year, is based on Google’s own internal security framework developed over more than a decade.

“Our security model has been built around the fact that just being in the office doesn’t give users any additional security. For years, we have been building the BeyondCorp model around the philosophy of being able to securely work from anywhere,” Lakshminarayanan said.

“So, if you’re out of the office and your laptop tanks, maybe you need to spin up a personal tablet or a phone to work. That’s not a managed device and now we have to take this into account and make our data access controls more granular, stricter. We have to adapt security to the conditions people actually work in.”

Putting data encryption control in customers’ hands

To this end, the introduction of client-side encryption for Workspace gives Google’s enterprise customers direct control of the encryption keys for their data, making data at rest and in transit on the platform “indecipherable to Google,” Lakshminarayanan said. The new encryption controls will be rolled out for beta testing by customers “in the coming weeks,” he said.

Previously, Google alone handled the encryption of customer data in Workspace. The new client-side encryption capabilities are aimed at organizations that need direct control over sensitive or regulated data for security and compliance reasons, such as Airbus, an early tester of the new capabilities.
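As a rough illustration of what “client-side” means here, the minimal Python sketch below (using the cryptography package; this is not Google’s Workspace API or its key access service) encrypts data locally with a key the customer controls before anything is uploaded, so the provider only ever stores ciphertext:

    from cryptography.fernet import Fernet

    def fetch_key_from_customer_kms() -> bytes:
        # Hypothetical stand-in for an external key access service run by the
        # customer or a partner; the storage provider never sees this key.
        return Fernet.generate_key()

    def encrypt_before_upload(plaintext: bytes, key: bytes) -> bytes:
        # Only the returned ciphertext ever leaves the client.
        return Fernet(key).encrypt(plaintext)

    key = fetch_key_from_customer_kms()
    ciphertext = encrypt_before_upload(b"quarterly forecast, confidential", key)
    # Decryption requires the customer-held key, not the provider's:
    assert Fernet(key).decrypt(ciphertext) == b"quarterly forecast, confidential"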

In beta testing of the new feature for Google Workspace Enterprise Plus and Google Workspace Education Plus, customers will be able to choose encryption key access services from Google partners FlowCrypt, Futurex, Thales, and Virtru.

Google will first make client-side encryption available for Workspace services Drive, Docs, Sheets, and Slides, promising support for multiple file types such as Microsoft Office files and PDFs. The new controls will be made available for Google Meet in the fall of this year, with support for Gmail and Calendar also planned at an unannounced time.

Fortifying Google Drive for more secure collaboration

Google also unveiled several security enhancements for Drive, the company’s file storage and synchronization service used by more than 1 billion people around the world.

In the coming months, Workspace Enterprise and Workspace Education Plus customers will be able to access new trust rules that give IT administrators greater control over how files can be shared with Drive inside and outside of their organizations. Lakshminarayanan said the new rules allow for more customizable file-sharing permissions for organizational units and groups, in contrast with “blanket” policies available now.
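As a purely conceptual sketch of per-group rules versus one blanket policy (in Python; this is not Google’s rule syntax or the Drive API), each organizational unit might carry its own sharing policy that is checked at share time:

    from dataclasses import dataclass

    @dataclass
    class TrustRule:
        org_unit: str          # organizational unit or group the rule covers
        allow_external: bool   # may files be shared outside the organization?

    RULES = [
        TrustRule("finance", allow_external=False),
        TrustRule("marketing", allow_external=True),
    ]

    def sharing_allowed(org_unit: str, recipient_is_external: bool) -> bool:
        for rule in RULES:
            if rule.org_unit == org_unit:
                return rule.allow_external or not recipient_is_external
        return not recipient_is_external  # default: internal sharing only

    print(sharing_allowed("finance", recipient_is_external=True))    # False
    print(sharing_allowed("marketing", recipient_is_external=True))  # True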

Google’s Drive labels, used to classify security levels for files stored in Drive, now incorporate Google’s DLP for Workspace. With labels, users can classify content so it is stored under retention policies for different sensitivity levels set by IT administrators. Admins can also create rules to automate the classification of files, using 60 new AI-powered content detectors that can identify sensitive content such as “resumes, SEC filings, patents, and source code,” according to Google.
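Google’s detectors are AI-powered and proprietary, but the underlying flow — scan content, attach a label, and let retention or sharing policies key off that label — can be sketched with simple pattern rules. The Python below is illustrative only and is not how Google’s classifiers work:

    import re

    DETECTORS = {
        "possible_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "possible_source_code": re.compile(r"^\s*(def |class |#include|import )", re.M),
    }

    def classify(text: str) -> list[str]:
        # Returns the labels whose patterns match; admin-defined policies
        # (retention, sharing restrictions) would then apply per label.
        return [label for label, pattern in DETECTORS.items() if pattern.search(text)]

    print(classify("import os\nprint('hello')"))   # ['possible_source_code']
    print(classify("SSN on file: 123-45-6789"))    # ['possible_ssn']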

Drive labels are available in beta now for Google Workspace Business Standard, Workspace Business Plus, Workspace Enterprise, Workspace for Education Standard, and Workspace Education Plus.

In the coming weeks, Google will add new internal protections against phishing and malware for Drive. The file storage service currently protects against such threats from external sources, but the enhancement will add safeguards against phishing and malware that originate within an organization, whether introduced by a malicious actor or unintentionally via user error or a compromised system. Google said all future Workspace SKUs will include the new internal phishing and malware protections.

AMD Ryzen Roadmap Shows Big Gains For Gaming Laptops

Leaker @Broly_X1 shared a road map on Friday detailing AMD’s mobile plans through 2022. The road map confirms and adds context to multiple leaks and rumors, showing that AMD is working on a Zen 3+ refresh for mobile, which is slated to launch in 2022.

The two green sections of the road map are what matter. Rembrandt is the code name for the tentative Ryzen 6000 mobile chips. We don’t know if that’s the name AMD will end up going with, but it’s as good a guess as any. The road map shows that these mobile chips will be built on the 6nm Zen 3+ architecture, which is a revision of the 7nm Zen 3 architecture that’s currently available.

Assuming AMD will continue partnering with semiconductor manufacturer TSMC, these chips will show modest improvements over their Zen 3 counterparts. The 7nm process and 6nm process are closely related at TSMC, with the latter showing minor improvements. Basically, Rembrandt isn’t an entirely new generation of processors.

The mobile chips will also feature RDNA 2 graphics cores. AMD’s current Ryzen 5000 mobile chips still use the older Vega graphics cores, so Rembrandt will offer a significant boost to gaming laptops. These cores could provide hardware-accelerated ray tracing, too, just like the RDNA 2-based RX 6000 graphics cards. They’re rumored to offer up to a 50% increase in graphics performance.

Rembrandt “U” chips are targeting thin and light laptops with only 15 watts of power, while Rembrandt “H” chips will go up to 45 watts for dedicated gaming laptops. Both chips will feature support for PCIe 4.0, DDR5, and USB 4. AMD will likely pair these chips with RX 6000M laptop graphics cards, which are rumored to launch soon.

The road map also reveals some other, less exciting chips coming down the pike. Van Gogh and Dragon Crest chips are special designs for devices without a lot of power. Dali and Pollock chips are already out on the market, built specifically for manufacturers and targeted at budget laptops. AMD will release Barcelo processors alongside Rembrandt, likely using the same branding but targeting lower-spec machines. That’s what AMD is doing with Ryzen 5000 mobile chips.

Laptop enthusiasts have a lot to look forward to. Despite AMD stomping through the desktop market, it has struggled to gain a foothold in the mobile space. Bolstering the already powerful Zen 3 architecture with RDNA 2 graphics and a more power-efficient design is a winning strategy, but we haven’t seen what Intel has in store for 2022 yet.

AMD Unveils Ryzen 5000G, Boasting Huge Gains Over Intel

AMD is starting to ship its Ryzen 5000G APUs to manufacturers, and DIYers hoping to get their hands on the processors should be able to do so later this year. The Ryzen 5000G series currently encompasses six models, ranging from entry-level Ryzen 3 parts to the high-end Ryzen 7 5700G, which is configured with eight cores and 16 threads, has a base clock speed of 3.8 GHz that can boost up to 4.6 GHz, and has a maximum TDP of 65W.

In addition to the Ryzen 7 5700G, other models in AMD’s Ryzen 5000G family include the Ryzen 7 5700GE, Ryzen 5 5600G, Ryzen 5 5600GE, Ryzen 3 5300G, and Ryzen 3 5300GE. Of these, the Ryzen 7 5700G, Ryzen 5 5600G, and Ryzen 3 5300G have a maximum TDP of 65W, while the Ryzen 7 5700GE, Ryzen 5 5600GE, and Ryzen 3 5300GE are low-power models with a maximum TDP of 35W.

It should be noted that AMD benchmarked its processors against Intel’s older 10th-gen CPUs rather than the newer 11th-gen processors with improved Intel Xe graphics. The Ryzen 7 5700G will be 38% faster at content creation, 35% better in productivity tasks, and 80% better in general computing performance than an Intel Core i7-10700, according to AMD’s own metrics in a report published by Guru3D.

The company revealed that these processors will be arriving “in the coming weeks” to pre-built systems.

The Ryzen 5000G APU was previously known by its code name, Cezanne. These processors are based on the 7nm Zen 3 microarchitecture paired with integrated Vega graphics. The Ryzen 7 5700G, for example, will have an integrated Radeon GPU with eight graphics cores, or 512 stream processors, running at a frequency of 2.0 GHz, while the Ryzen 3 5300G will have an integrated GPU with six graphics cores.

Pricing information was not immediately available, as these processors are headed to pre-built systems from PC manufacturers first, and we expect AMD will disclose pricing details when the chips are available for consumers to purchase.

Given the global semiconductor shortage that’s plaguing the CPU and GPU markets, it’s unclear whether AMD’s latest 5000G-series APUs will be adequately stocked. Still, because the prior-generation Ryzen 4000 APUs were only available in pre-built systems, AMD’s expansion of the Ryzen 5000G lineup to DIYers is good news for gamers looking to build their own rigs rather than invest in a pre-configured machine.
