Windows 11 Gains Back A Highly Requested Taskbar Feature

Microsoft has confirmed that the weather widget originally introduced in Windows 10 will be integrated into Windows 11. A new voice access feature has also been added to the operating system.

The latest Insider Preview build for Windows 11, build 22518, displays live weather content on the left side of the taskbar. Users will also be able to open the widgets board by hovering over that entry point.

Another addition to Windows 11’s latest preview build is voice access, which allows users to control several aspects of their PC and author text through voice commands.

The new feature lets you open and switch between apps, browse the web, and control the mouse and keyboard. For example, you can click an item like a button or a link by saying “click start” or “click cancel.” Similarly, you’ll be able to open an application by saying “open Edge” or “open Word.”
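
As a rough illustration of how such a command grammar works, here is a tiny Python sketch that dispatches recognized phrases to actions. The phrase set and handlers are hypothetical stand-ins, not Microsoft’s implementation:

```python
# Toy dispatch table mapping a spoken verb to an action handler. The
# handlers below are hypothetical stand-ins for real UI automation.
def open_app(name: str) -> None:
    print(f"(launching application: {name})")

def click_element(label: str) -> None:
    print(f"(clicking UI element: {label})")

COMMANDS = {"open": open_app, "click": click_element}

def handle_utterance(utterance: str) -> None:
    verb, _, argument = utterance.partition(" ")
    handler = COMMANDS.get(verb.lower())
    if handler and argument:
        handler(argument)
    else:
        print(f"(unrecognized command: {utterance!r})")

handle_utterance("open Edge")     # (launching application: Edge)
handle_utterance("click cancel")  # (clicking UI element: cancel)
```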

Other elements that can be controlled by voice include searching and editing text, as well as interacting with overlays. Microsoft provided a full list of voice access commands, but pointed out that the feature only supports the English (U.S.) option in the display language settings.

Microsoft has also made it easier for new users to install the Windows Subsystem for Linux through the Microsoft Store.

Microsoft is also introducing the Spotlight collection to Windows 11, which will “keep your desktop fresh and inspiring.” New desktop pictures from around the world will be offered every day, accompanied by various facts pertaining to the picture itself.

Besides Insider Preview build 22518, Microsoft recently released a redesigned Notepad. An updated user interface brings changes like rounded corners, but the most exciting addition is a dark mode. The new-look Notepad also addresses a “top community feature request” by adding support for multilevel undo.

There’s currently no timeline for when these changes will reach all Windows 11 users, but expect a rollout sometime in 2022. As for the latest preview build released in the Windows 11 Dev Channel, Microsoft stated that it won’t be offered to ARM64 PCs due to an issue the company is currently working to fix.

As for Microsoft’s plans for Windows 11 in 2022, one major area of focus is performance, with the company highlighting that improving the responsiveness of the new operating system will be a priority.

Windows 11 Adds Highly Requested Taskbar Feature

Microsoft is making it easy for you to mute your microphone when you don’t want to be heard on Microsoft Teams calls. Rolling out in the latest Windows 11 Dev Channel build is a new mute icon in the taskbar for when communications apps like Microsoft Teams are in use.

The initial iteration of this icon works as you’d expect, though it is currently available only to select Windows 11 beta testers. Instead of having to manually search for the mute button in Teams, you can click the microphone icon in the Windows 11 taskbar and choose the Mute option. You can also use the icon to see your call audio status and which app is accessing your microphone. The icon remains present throughout your call, no matter how many windows you have open or what is on your screen.

“No more awkward or embarrassing moments when you forget to unmute or mute your microphone. You can now communicate and collaborate with confidence and ease using the new call mute feature on Windows 11,” wrote Amanda Langowski and Brandon LeBlanc, who head the Windows Insider Program.

This initial mute button only works with the desktop version of Microsoft Teams on school or work accounts, and not all Windows 11 Dev Channel Insiders will see it right away. Microsoft is planning to ramp up the rollout of the icon over time and to bring support for it to the built-in Windows 11 Chat app soon. Other communication apps like Slack, Zoom, and Google Meet can tap into the feature and add the capability as well, though it appears to be up to those app developers to enable it.

Once beta testing is complete, Microsoft plans to roll out the mute icon to the regular version of Windows 11 in a future servicing update. When it reaches everyone, it will be the latest in a line of time-saving features added to Windows 11, alongside Snap Layouts, the Widgets app, and the centered Start Menu that shows links to the most recent files and apps.

The Windows Insider build that brings this new microphone mute icon also addresses several other issues in Windows 11, ranging from File Explorer and the taskbar to search. If you want to experience it for yourself, you can opt your Windows 11 PC into the Dev Channel of the Windows Insider Program. But, as Microsoft said, not every Windows Insider will see it, and keep in mind that Windows 11 Dev Channel builds are known to be unstable.

Data labeling for AI research is highly inconsistent, study finds

Supervised machine learning, in which models learn from labeled training data, is only as good as the quality of that data. In a study published in the journal Quantitative Science Studies, researchers at the consultancy Webster Pacific and the University of California’s San Diego and Berkeley campuses investigate to what extent best practices around data labeling are followed in AI research papers, focusing on human-labeled data. They found that the types of labeled data vary widely from paper to paper and that a “plurality” of the studies they surveyed gave no information about who performed the labeling or where the data came from.

While labeled data is usually equated with ground truth, datasets can — and do — contain errors. The processes used to build them are inherently error-prone, which becomes problematic when these errors reach test sets, the subsets of datasets researchers use to compare progress. A recent MIT paper identified thousands to millions of mislabeled samples in datasets used to train commercial systems. These errors could lead scientists to draw incorrect conclusions about which models perform best in the real world, undermining benchmarks.
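
To see why such errors matter, here is a self-contained Python sketch, using purely synthetic labels, of how annotation mistakes in a test set can flip which of two models appears better, for example when one model has effectively fit the annotators’ errors:

```python
# Minimal sketch, with synthetic data only, of test-set label noise
# inverting a benchmark comparison.
import random

random.seed(0)
n = 10_000
true_labels = [random.randint(0, 1) for _ in range(n)]
# Annotators mislabel 8% of the test set.
noisy_labels = [1 - y if random.random() < 0.08 else y for y in true_labels]

def accuracy(preds, labels):
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

# Model A predicts the truth 94% of the time; model B instead matches the
# annotators' noisy labels 96% of the time (it has fit their mistakes).
model_a = [y if random.random() < 0.94 else 1 - y for y in true_labels]
model_b = [y if random.random() < 0.96 else 1 - y for y in noisy_labels]

print("on true labels :", round(accuracy(model_a, true_labels), 3),
      round(accuracy(model_b, true_labels), 3))   # A wins (~0.94 vs ~0.89)
print("on noisy labels:", round(accuracy(model_a, noisy_labels), 3),
      round(accuracy(model_b, noisy_labels), 3))  # B wins (~0.87 vs ~0.96)
```

Scored against the noisy labels a benchmark actually ships with, the worse model looks better, which is exactly the kind of inverted conclusion the MIT paper warns about.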

The coauthors of the Quantitative Science Studies paper examined 141 AI studies across a range of different disciplines, including social sciences and humanities, biomedical and life sciences, and physical and environmental sciences. Out of all of the papers, 41% tapped an existing human-labeled dataset, 27% produced a novel human-labeled dataset, and 5% didn’t disclose either way. (The remaining 27% used machine-labeled datasets.) Only half of the projects using human-labeled data revealed whether the annotators were given documents or videos containing guidelines, definitions, and examples they could reference as aids. Moreover, there was a “wide variation” in the metrics used to rate whether annotators agreed or disagreed with particular labels, with some papers failing to note this altogether.
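
To make that concrete, one common inter-annotator agreement metric is Cohen’s kappa, which corrects raw percent agreement for the agreement expected by chance. A minimal Python sketch follows; the two annotators’ labels are invented for illustration, not data from the study:

```python
# Minimal sketch of Cohen's kappa, a common inter-annotator agreement metric.
from collections import Counter

def cohens_kappa(a, b):
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    # Chance agreement: probability both annotators independently pick the
    # same label, given each annotator's own label frequencies.
    freq_a, freq_b = Counter(a), Counter(b)
    expected = sum(freq_a[k] * freq_b[k] for k in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

annotator_1 = ["pos", "pos", "neg", "pos", "neg", "neg", "pos", "neg"]
annotator_2 = ["pos", "neg", "neg", "pos", "neg", "pos", "pos", "neg"]
# 75% raw agreement, but only kappa = 0.5 once chance is accounted for.
print(cohens_kappa(annotator_1, annotator_2))  # 0.5
```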

Compensation and reproducibility

As a previous study by Cornell and Princeton scientists pointed out, a major venue for crowdsourcing labeling work is Amazon Mechanical Turk, where annotators mostly originate from the U.S. and India. This can lead to an imbalance of cultural and social perspectives. For example, research has found that models trained on ImageNet and Open Images, two large, publicly available image datasets, perform worse on images from Global South countries. Images of grooms are classified with lower accuracy when they come from Ethiopia and Pakistan compared to images of grooms from the U.S.

For annotators, labeling tasks tend to be monotonous and low-paying — ImageNet workers made a median of $2 per hour in wages. Unfortunately, the Quantitative Science Studies survey shows that the AI field leaves the issue of fair compensation largely unaddressed. Most publications didn’t indicate what type of reward they offered to labelers or even include a link to the training dataset.

Beyond doing a disservice to labelers, the lack of links threatens to exacerbate the reproducibility problem in AI. At ICML 2019, 30% of authors failed to submit code with their papers by the start of the conference. And one report found that 60% to 70% of answers given by natural language processing models were embedded somewhere in the benchmark training sets, indicating that the models were often simply memorizing answers.
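
That 60% to 70% figure comes from checking how often test answers already appear verbatim in the training data. Below is a toy Python sketch of that kind of overlap check, using made-up data:

```python
# Toy sketch (hypothetical data) of a train/test contamination check:
# counting test answers that already appear verbatim in the training text,
# which signals memorization rather than generalization.
train_corpus = [
    "the capital of france is paris",
    "water boils at 100 degrees celsius",
]
test_answers = ["paris", "berlin", "100 degrees celsius"]

train_text = " ".join(train_corpus)
leaked = [answer for answer in test_answers if answer in train_text]
print(f"{len(leaked)}/{len(test_answers)} answers found in training text: {leaked}")
# -> 2/3 answers found in training text: ['paris', '100 degrees celsius']
```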

“Some of the papers we analyzed described in great detail how the people who labeled their dataset were chosen for their expertise, from seasoned medical practitioners diagnosing diseases to youth familiar with social media slang in multiple languages. That said, not all labeling tasks require years of specialized expertise, such as more straightforward tasks we saw, like distinguishing positive versus negative business reviews or identifying different hand gestures,” the coauthors of the Quantitative Science Studies paper wrote. “Even the more seemingly straightforward classification tasks can still have substantial room for ambiguity and error for the inevitable edge cases, which require training and verification processes to ensure a standardized dataset.”

Moving forward

The researchers avoid advocating for a single, one-size-fits-all solution to human data labeling. However, they call for data scientists who choose to reuse datasets to exercise as much caution around the decision as they would if they were labeling the data themselves — lest bias creep in. An earlier version of ImageNet was found to contain photos of naked children, porn actresses, and college parties, all scraped from the web without those individuals’ consent. Another popular dataset, 80 Million Tiny Images, was taken offline after an audit surfaced racist, sexist, and otherwise offensive annotations, such as nearly 2,000 images labeled with the N-word and labels like “rape suspect” and “child molester.”

“We see a role for the classic principle of reproducibility, but for data labeling: does the paper provide enough detail so that another researcher could hypothetically recruit a similar team of labelers, give them the same instructions and training, reconcile disagreements similarly, and have them produce a similarly labeled dataset?” the researchers wrote. “[Our work gives] evidence to the claim that there is substantial and wide variation in the practices around human labeling, training data curation, and research documentation … We call on the institutions of science — publications, funders, disciplinary societies, and educators — to play a major role in working out solutions to these issues of data quality and research documentation.”

AMD’s Highly Powerful RX 6900 XT Sets a New World Record

Team OGS, or Overclocked Gaming Systems, achieved a new world record with the AMD RX 6900 XT. The Greek overclockers were able to push the card to 3.3GHz, the fastest clock speed ever for a graphics card. The achievement comes less than a month after a previous world record was set by Der8auer, who achieved a speed of 3.2GHz on the same PowerColor Liquid Devil Ultimate card.

The group was able to reach such speeds thanks to the Navi 21 XTXH GPU. Originally, the Navi 21 GPU inside the 6900 XT had an artificial clock limit of 3.0GHz. The updated XTXH variant ups the limit to 4.0GHz, offering more headroom for extreme overclockers to take advantage of.

Team OGS used the PowerColor Liquid Devil Ultimate card on an LN2 rig, just like Der8auer. The Liquid Devil is about as high-end as graphics cards get, shipping with binned GPUs for peak performance, a 14+2 VRM design, and three 8-pin power connectors. It also comes with a preinstalled waterblock, but both Team OGS and Der8auer removed the block to cool the card with liquid nitrogen.

Outside of the card, Team OGS used an AMD Ryzen 9 5950X CPU overclocked to 5.6GHz and an Asus ROG Crosshair VIII Dark Hero motherboard. Reaching 3.3GHz is a feat on its own, but OGS also ran the rig through 3DMark Fire Strike Extreme to put some performance numbers behind the setup.

The results are, unsurprisingly, just as extreme. The 6900 XT achieved a graphics score of 41,069, and the rig as a whole earned a combined score of 37,618. For context, those results are better than 99% of all others. Due to a driver issue, however, the results are invalid, so they won’t show up on the official Fire Strike Extreme leaderboard. If they were valid, they would rank eighth, directly under a rig sporting four Nvidia Titan X graphics cards in SLI.

The 6900 XT is the fastest graphics card AMD currently offers, and the new world record shows just how capable the RDNA 2 architecture is. This likely isn’t the last time we’ll see the 6900 XT breaking records.

EU report warns that AI makes autonomous vehicles ‘highly vulnerable’ to attack

The dream of autonomous vehicles is that they can avoid human error and save lives, but a new European Union Agency for Cybersecurity (ENISA) report has found that autonomous vehicles are “highly vulnerable to a wide range of attacks” that could be dangerous for passengers, pedestrians, and people in other vehicles. Attacks considered in the report include blinding sensors with beams of light, overwhelming object detection systems, malicious activity in back-end systems, and adversarial machine learning attacks delivered through training data or the physical world.

“The attack might be used to make the AI ‘blind’ for pedestrians by manipulating for instance the image recognition component in order to misclassify pedestrians. This could lead to havoc on the streets, as autonomous cars may hit pedestrians on the road or crosswalks,” the report reads. “The absence of sufficient security knowledge and expertise among developers and system designers on AI cybersecurity is a major barrier that hampers the integration of security in the automotive sector.”

The range of AI systems and sensors needed to power autonomous vehicles increases the attack surface area, according to the report. To address vulnerabilities, its authors say policymakers and businesses will need to develop a security culture across the automotive supply chain, including for third-party providers. The report urges car manufacturers to take steps to mitigate security risks by thinking of the creation of machine learning systems as part of the automotive industry supply chain.

The report focuses on cybersecurity attacks that use adversarial machine learning, which carries the risk of malicious manipulations undetectable to humans. It also finds that the use of machine learning in cars will require continuous review of systems to ensure they haven’t been altered in a malicious way.
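
To ground what an adversarial machine learning attack looks like in code, below is a minimal NumPy sketch in the spirit of the fast gradient sign method (FGSM). The toy linear classifier is a stand-in assumption for illustration, not any real vehicle’s perception stack:

```python
# Minimal FGSM-style sketch: a small, bounded perturbation along the
# gradient's sign flips a toy linear classifier's decision.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=100)       # weights of a toy linear classifier
x = rng.normal(size=100)       # an input the classifier gets right
x = x * np.sign(w @ x)         # flip x if needed so its clean score is positive

# FGSM step: nudge every feature by -epsilon along the gradient's sign.
# For a linear score w @ x, the gradient with respect to x is just w.
epsilon = 0.25
x_adv = x - epsilon * np.sign(w)

print("clean score       :", float(w @ x))     # positive -> original decision
print("adversarial score :", float(w @ x_adv)) # pushed negative -> decision flipped
print("max feature change:", float(np.abs(x_adv - x).max()))  # == epsilon
```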

“AI cybersecurity cannot just be an afterthought where security controls are implemented as add-ons and defense strategies are of reactive nature,” the paper reads. “This is especially true for AI systems that are usually designed by computer scientists and further implemented and integrated by engineers. AI systems should be designed, implemented, and deployed by teams where the automotive domain expert, the ML expert, and the cybersecurity expert collaborate.”

Scenarios presented in the report include the possibility of attacks on motion planning and decision-making algorithms and spoofing, like the kind that can fool an autonomous vehicle into “recognizing” cars, people, or walls that don’t exist.

In the past few years, a number of studies have shown that physical perturbations can fool autonomous vehicle systems with little effort. In 2017, researchers used spray paint or stickers on a stop sign to fool an autonomous vehicle into misidentifying the sign as a speed limit sign. In 2019, Tencent security researchers used stickers to make Tesla’s Autopilot swerve into the wrong lane. And researchers demonstrated last year that they could lead an autonomous vehicle system to quickly accelerate from 35 mph to 85 mph by strategically placing a few pieces of tape on the road.

The report was coauthored by the Joint Research Centre, a science and tech advisor to the European Commission. Weeks ago, ENISA released a separate report detailing cybersecurity challenges created by artificial intelligence.

In other autonomous vehicle news, last week Waymo began testing robo-taxis in San Francisco. But an MIT task force concluded last year that autonomous vehicles could be at least another decade away.
