Malwarebytes resolves error that was blocking Google this morning

Are you suddenly getting a lot of alerts from your Malwarebytes app? Are Google-owned websites and services suddenly not working anymore, perhaps even including the browser, Google Chrome? It’s not just you.

Many Malwarebytes users are currently experiencing these problems, and right now, there’s only one fix — disabling the software entirely. The question is, where does the problem lie? Is Google truly compromised, or is Malwarebytes just broken?

Update: Malwarebytes has announced that the issue is now resolved and issued a software update. However, some people are still reporting problems. If you’re experiencing issues, try to update your copy of Malwarebytes.

Malwarebytes is a well-known program that helps protect users from malware and viruses. It offers real-time protection as well as the ability to scan for malware. If any files are found to be infected, they can be put into quarantine. Normally, this program works great, but today, it certainly doesn’t.

Hundreds of Malwarebytes users turned to Twitter, Reddit, and the official Malwarebytes forums, all reporting the same issue: All Google services are currently being blocked as malware. Visiting websites such as the Google search engine, Gmail, Google Docs or Sheets, or even YouTube results in dozens of pop-up messages from Malwarebytes, all announcing that the website was blocked due to malware.

It doesn’t stop there, though. Some users, myself included, have experienced bluescreens when trying to open Google Chrome. Other browsers work fine, but they still do not load Google websites, so the problem seems to affect solely Google products.

It’s worth noting that Google services work fine on devices that are not running Malwarebytes, and they resume working the moment you disable the software. As such, it’s almost certain that the problem lies with Malwarebytes. Many users didn’t have this problem before updating the software, so a faulty patch may be to blame.

How to fix the problem

We are aware of a temporary issue with the web filtering component of our product that may be blocking certain domains. We are actively working on a fix and will update Twitter as soon as we have more information.

— Malwarebytes (@Malwarebytes) September 21, 2022

Malwarebytes has announced that it’s aware of the problem and is “actively working on a fix.” However, it hasn’t been announced when this fix might happen, so until it does, many are left frustrated.

If you want to work around this issue, the only fix right now (also recommended by Malwarebytes) is to turn off web protection. To do this, open Malwarebytes and, on the right side of the app, toggle off Web Protection under Real-Time Protection. Didn’t work? Quit Malwarebytes entirely. If that doesn’t help, you may have to reboot your computer and/or your router after disabling Malwarebytes.

Obviously, this is not ideal — you will be left without one of your computer’s major defenses if Malwarebytes is your usual go-to. Play it extra safe during this time and don’t visit any websites that could be dodgy. Check out our list of the best antivirus software if you want to make sure you’re protected in the meantime.

Hopefully, Malwarebytes will release an update soon and this issue will be resolved. Keep an eye on the Malwarebytes Twitter account if you want to download your patch as soon as possible.

Repost: Original Source and Author Link


How to sort your data in Google Sheets

Google Sheets is a remarkably powerful and convenient tool for collecting and analyzing data, but sometimes it can be hard to understand what that raw data means. One of the best ways to see the big picture is to sort it to help bring the most important information to the top, showing which is the largest or smallest value relative to the rest.

It’s not surprising, then, that Google Sheets has a powerful sort feature. It’s easy to sort data in Google Sheets, but a few concepts need to be clear to achieve the best result and glean the most valuable insights. With a few tips, you’ll quickly master sorting by one or more columns and be able to bring up different views for a better understanding of what the data means.

How to quickly sort a simple sheet

Sorting a Google sheet by a single column is quick and easy. For example, with a table of foods that are good sources of protein, you might want to sort by name, serving size, or amount of protein. Here’s how to do that.

Step 1: Move the mouse pointer over the column that you want to sort by and select the Downward arrow that appears to open a menu of options.

Step 2: A little over halfway down the menu are the sort options. Select Sort sheet A to Z to sort the entire sheet so the chosen text column is in alphabetical order. Sort sheet Z to A places that column in reverse alphabetical order. 

There are two text sort options.

Step 3: To sort a numerical column, follow the same procedure. Choose Sort sheet A to Z for a low-to-high order of numerical values or Sort sheet Z to A to see the highest values at the top.

The same sort options are available for numbers.

Step 4: No matter which column is sorted, the information from all other columns in the sheet is moved to keep the same order. In our example, a cup of walnuts continues to show as 30 grams, and a 2.5-ounce steak has 22 grams of protein.

Sorting a sheet by a column keeps row data aligned.
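The behavior described above can be sketched in a few lines of Python: each row is kept together as a unit, so sorting by one column moves the other values along with it. The walnut and steak figures come from the example above; the egg row is a hypothetical addition for illustration.

```python
# Each inner list is one row: name, serving size, protein (g).
foods = [
    ["Walnuts", "1 cup", 30],
    ["Steak", "2.5 oz", 22],
    ["Eggs", "2 large", 12],   # hypothetical extra row
]

# "Sort sheet A to Z" on the name column: the whole row moves,
# so serving size and protein stay aligned with their food.
by_name = sorted(foods, key=lambda row: row[0])

# "Sort sheet Z to A" on the protein column puts the highest value first.
by_protein = sorted(foods, key=lambda row: row[2], reverse=True)

print(by_name[0])     # ['Eggs', '2 large', 12]
print(by_protein[0])  # ['Walnuts', '1 cup', 30]
```

However you sort, a cup of walnuts still shows 30 grams, because the row travels as one unit.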

Step 5: If a numerical column shows values in more than one unit, the sort will fail to give the expected result. In our example Google Sheet about protein sources, the serving-size column includes cups, ounces, tablespoons, and slices. The built-in sort feature ignores units, so 1 cup will incorrectly be treated as if it’s smaller than 3 ounces. The best solution is to convert the data to use only one unit per column.

Google Sheets doesn't consider units when sorting.
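The unit problem can be illustrated with a short Python sketch. Sheets compares only the raw numbers ("1" cup versus "2.5" ounces), so the fix is to normalize everything to a single unit before sorting. The foods and conversion factors below are rough, illustrative assumptions, not values from the example sheet.

```python
# Approximate weight of one unit, in grams (illustrative values).
TO_GRAMS = {"cup": 125.0, "oz": 28.35, "tbsp": 14.0}

servings = [
    ("walnuts", 1, "cup"),
    ("steak", 2.5, "oz"),
    ("peanut butter", 2, "tbsp"),
]

def in_grams(item):
    """Convert (name, amount, unit) to a comparable weight in grams."""
    name, amount, unit = item
    return amount * TO_GRAMS[unit]

# Sorting the raw amounts would wrongly rank 1 cup below 2.5 oz;
# sorting by the converted weight gives the expected order.
ordered = sorted(servings, key=in_grams)
print([name for name, *_ in ordered])  # smallest serving first
```

In a spreadsheet, the equivalent is a helper column holding the converted values, which you then sort by.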

Sorting a sheet with headers

The quick-sort method described above is convenient; however, it doesn’t work as expected when the sheet has headers. When you’re sorting the entire sheet, all rows are included by default, which can mix headings and units in with the data. Google Sheets can lock cells so they can’t be changed, but it can also freeze one or more rows in place at the top so headers don’t get mixed in with your information.

Step 1: To freeze the position of header rows, open the View menu at the top of the Google Sheets page (not your browser’s View menu). Then, select the Freeze option and choose 1 row or 2 rows, depending on the number of header rows in your sheet.

Freeze one or two rows to lock the header rows.

Step 2: If there are more headers, it’s possible to freeze more rows. Simply select a cell in the lowest header row, then choose Freeze, then Up to row X in the View menu, where X is the row number.

Google Sheets can freeze more header rows.

Step 3: After freezing headers, a thick gray line appears to show where the split occurs. A sheet can then be sorted using the column menu’s Sort sheet A to Z or Sort sheet Z to A option. This leaves the headers in place at the top of your Google sheet while rearranging your data into a more useful table.

The column menu can sort while keeping frozen headers in place.

Sort by more than one column

A single-column sort is quick and handy, but often, there’s more than one variable to consider when comparing figures. In our example, you might be most interested in which food has the least fat but also want to get more protein. That’s easy to do with a sort range.

Step 1: Select the cells that you want to sort, including one header row. This can be done quickly by choosing the top-left cell, then holding down Control+Shift (Command+Shift on a Mac) and pressing the Right arrow to select the full width of the table.

Use Control+Shift+Right Arrow to select a horizontal range

Step 2: The same can be done for the full height by continuing to hold Control+Shift and pressing the Down arrow. Now the entire table of data will be selected.

Use Control+Shift+Right Arrow, then Control+Shift+Down Arrow, to select a range.

Step 3: From the Google Sheets Data menu, choose Sort range > Advanced range sorting options.

Google Sheets advanced sorting is in the Data menu

Step 4: A window will open that lets you pick multiple sort columns. If your sort range includes a header row, check the box beside Data has header row.

The column header names appear in the sort options if that box is checked

Step 5: The Sort by field will now show your header column names instead of making you choose columns by using their letter designation. Pick the primary sort column — for example, Fat (g) — and make sure A-Z is selected for sort order.

With header row checked, the names show up in the Sort by field.

Step 6: Choose the Add another sort column button to have a secondary sort, such as protein. It’s possible to add as many sort columns as you want before selecting the Sort button to see the results.

You can add more sort columns before sorting your data.
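Conceptually, a multi-column sort compares rows by the primary column first and breaks ties using the secondary column. Here is a minimal Python sketch of the low-fat-first, high-protein-second sort described above; the food values are made up for illustration.

```python
# Rows: (food, fat in grams, protein in grams) -- illustrative values.
rows = [
    ("chicken breast", 3, 31),
    ("egg whites", 0, 11),
    ("tofu", 5, 8),
    ("cod", 0, 18),
]

# Primary sort: Fat ascending ("A-Z"). Secondary sort: Protein
# descending, so ties on fat are won by the higher-protein food.
ranked = sorted(rows, key=lambda r: (r[1], -r[2]))

print(ranked[0][0])  # "cod": ties egg whites on fat (0 g) but has more protein
```

Each extra sort column you add in the dialog simply appends another element to this comparison, applied only when all earlier columns tie.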

Save a data range

Instead of selecting a range every time you want to re-sort it, you can save the table as a named range.

Step 1: Select the range of data that you want to save for easier access, then choose Named ranges from the Google Sheets Data menu.

Named ranges can be selected faster.

Step 2: A panel will open on the right, and you can type a name for this range.

Type a name for the selected range.

Step 3: The Named ranges panel will remain open, and you can select the entire range again by choosing it from this panel.

Choose a named range to select all of its data.

Saving a sort view

It’s also possible to save more than one advanced sort operation for easy access in the future. This lets you switch between different views when making a presentation or when you need to analyze information from different angles. This is possible by creating a Google Sheets filter view.

Step 1: Select the range of data you want to sort, then choose Create new filter view from the Filter views submenu in the Google Sheets Data menu.

Create a filter view to save advanced sorts.

Step 2: A new bar will appear at the top of the sheet with filter view options. Choose the name field and type a name for this filter view, such as “Low Fat/High Protein.”

Filter views can be named so they are easy to remember

Step 3: The name of each header column will now include a sort menu on the right side. Using our example data, choose the Protein sort menu and then select Sort Z-A to place the highest values at the top.

Use the header row's sort menu

Step 4: Repeat this process on the Fat sort menu but choose Sort A-Z to show the lowest values first. This new filter view will show a sort with low fat being the top priority and high protein as a secondary consideration.

Multiple columns can be sorted with a filter view.

Step 5: You can create more filter views in the same way and use as many sort columns as needed. To load a filter view, open the Data menu, select Filter views, then select the view that you’d like to see.

Filter views can be loaded in the Data menu.

How to share a sorted Google Sheet

Google Sheets is readily available to anyone with an internet connection, making this a great tool for sharing information with others.

Step 1: Sorted spreadsheets can be shared by selecting the big green Share button in the upper-right corner.

The Share button makes it easy to collaborate with others.

Step 2: It’s also possible to download a Google Sheet to share over email or print it to send via regular mail.

You can also share a Google sheet via email.

Google Sheets provides several ways to sort data, and this can make a big difference when trying to analyze a complicated data set. For more information, check out our complete beginner’s guide that shows how to use Google Sheets.

We also have a guide that explains how to make graphs and charts in Google Sheets, a great way to see data in an easy-to-digest visual form.


Google now owns Mandiant, the firm that found SolarWinds

Google has completed its acquisition of Mandiant, bringing a major name in cybersecurity under the tech giant’s ever-growing umbrella.

The $5.4 billion acquisition, announced in March, was completed on Monday, according to a Google press release. Per details in the release, Mandiant will keep its own brand while operating under the Google Cloud branch of its new parent company. Google Cloud, the company’s cloud computing platform, provides computing and data storage infrastructure for other companies to build products on top of.

Mandiant is best known for uncovering the SolarWinds hack, a massive Russia-linked breach that compromised US government agencies including the departments of Homeland Security, State, Defense, and Commerce.

In a blog post, Google Cloud CEO Thomas Kurian highlighted Mandiant’s threat intelligence expertise and said that Google intends to combine that with its enormous data processing and machine learning capabilities to protect customers from cyber threats.

“Our goal is to democratize security operations with access to the best threat intelligence and built-in threat detections and responses,” Kurian wrote. “Ultimately, we hope to shift the industry to a more proactive approach focused on modernizing Security Operations workflows, personnel, and underlying technologies to achieve an autonomic state of existence – where threat management functions can scale as customers’ needs change and as threats evolve.”

Google already has significant threat intelligence capabilities, with perhaps the best known among them being the Threat Analysis Group (TAG) — a team that tracks and counters state-backed hacking attempts. But the Mandiant acquisition will add hundreds more expert threat analysts to Google’s ranks, lending even more security proficiency to shore up Google’s cloud offerings.

Microsoft was also rumored to be considering an acquisition of Mandiant earlier this year but was beaten to the punch by Google, a sign of the growing importance of the cloud security market.

Kevin Mandia, CEO of Mandiant, wrote in a blog post that, in a threat landscape where “criminals, nation states, and plain bad actors bring harm to good people,” the acquisition would improve his company’s ability to respond.

“By combining our expertise and intelligence with the scale and resources of Google Cloud, we can make a far greater difference in preventing and countering cyber attacks, while pinpointing new ways to hold adversaries accountable,” Mandia said.


Former Conti ransomware gang members helped target Ukraine, Google says

A cybercriminal group containing former members of the notorious Conti ransomware gang is targeting the Ukrainian government and European NGOs in the region, Google says.

The details come from a new blog post from the Threat Analysis Group (TAG), a team within Google dedicated to tracking state-sponsored cyber activity.

With the war in Ukraine having lasted more than half a year, cyber activity including hacktivism and electronic warfare has been a constant presence in the background. Now, TAG says that profit-seeking cybercriminals are becoming active in the area in greater numbers.

From April through August 2022, TAG has been following “an increasing number of financially motivated threat actors targeting Ukraine whose activities seem closely aligned with Russian government-backed attackers,” writes TAG’s Pierre-Marc Bureau. One of these state-backed actors has already been designated by CERT — Ukraine’s national Computer Emergency Response Team — as UAC-0098. But new analysis from TAG links it to Conti: a prolific global ransomware gang that shut down the Costa Rican government with a cyberattack in May.

“Based on multiple indicators, TAG assesses some members of UAC-0098 are former members of the Conti cybercrime group repurposing their techniques to target Ukraine,” Bureau writes.

The group known as UAC-0098 has previously used a banking Trojan known as IcedID to carry out ransomware attacks, but Google’s security researchers say it is now shifting to campaigns that are “both politically and financially motivated.” According to TAG’s analysis, the members of this group are using their expertise to act as initial access brokers — the hackers who first compromise a computer system and then sell off access to other actors who are interested in exploiting the target.

In recent campaigns, the group sent phishing emails impersonating the Cyber Police of Ukraine to a number of organizations in the Ukrainian hospitality industry and, in another instance, targeted humanitarian NGOs in Italy with phishing emails sent from the hacked email account of an Indian hotel chain.

Other phishing campaigns impersonated representatives of Starlink, the satellite internet system operated by Elon Musk’s SpaceX. These emails delivered links to malware installers disguised as software required to connect to the internet through Starlink’s systems.

The Conti-linked group also exploited the Follina vulnerability in Windows systems shortly after it was first publicized in late May of this year. In this and other attacks, it is not known exactly what actions UAC-0098 has taken after systems have been compromised, TAG says.

Overall, the Google researchers point to “blurring lines between financially motivated and government backed groups in Eastern Europe,” an indicator of the way cyber threat actors often adapt their activities to align with the geopolitical interests in a given region.

But it’s not always a strategy guaranteed to win. At the start of the Ukraine invasion, Conti paid the price for openly declaring support for Russia when an anonymous individual leaked access to over a year’s worth of the group’s internal chat logs.


Here’s why you need to update your Google Chrome right now

Google has just released a new version of Chrome, and it’s crucial that you get your browser updated as soon as possible.

The patch was deployed to fix a major zero-day security flaw that could potentially pose a risk to your device. The latest update is now available for Windows, Mac, and Linux — here’s how to make sure your browser is safe.


The vulnerability, tracked as CVE-2022-3075, was discovered by an anonymous security researcher and reported directly to Google. It was caused by insufficient data validation in Mojo, which is a collection of runtime libraries. Google doesn’t say much beyond that, and that makes sense — the vulnerability is still being exploited in the wild, so it’s better not to make the exact details public just yet.

What we do know is that the vulnerability was assigned a high priority level, which means that it could potentially be dangerous if abused. Suffice it to say that it’s better if you update your browser right now.

Although Google is keeping the details under wraps for now, this is an active vulnerability: now that it has been spotted, it could be exploited on devices that haven’t downloaded the latest patch. The fix is included in version 105.0.5195.102 of Google Chrome. Google estimates that it might take a few days or even weeks until the entire user base receives the update automatically.

How to stay safe

Hands on a laptop.
EThamPhoto / Getty Images

Your browser should download the update automatically the next time you open it. If you want to double-check that you’re up to date, open the Chrome menu and follow this path: Help -> About Google Chrome. Alternatively, you can type “Update Chrome” into the address bar and select the update action that appears in the suggestions below.

You will be asked to re-launch the browser once the update has been downloaded. If it’s not available to you yet, make sure to check back shortly, as Google will be rolling it out to more and more users.

Google Chrome continues to be a popular target for various cyberattacks and exploits. It’s not even just the browser itself that is often targeted, but its extensions, too. To that end, make sure to only download and use extensions from reputable companies, and don’t be too quick to stack too many of them at once. We have a list of some of the best Chrome extensions if you want to pick out the ones that are trustworthy.


Google Chrome’s latest update has a security fix you should install ASAP

Google Chrome users on Windows, Mac, and Linux need to install the latest update to the browser to protect themselves from a serious security vulnerability that hackers are actively exploiting.

“Google is aware of reports that an exploit for CVE-2022-3075 exists in the wild,” the company said in a September 2nd blog post. An anonymous tipster reported the problem on August 30th, and Google says it expects the update to roll out to all users in the coming days or weeks.

The company hasn’t released much information yet on the nature of the bug. What we know so far is that it has to do with “insufficient data validation” in Mojo, a collection of runtime libraries used by Chromium, the codebase Google Chrome is built on.

“Access to bug details and links may be kept restricted until a majority of users are updated with a fix,” the company said. By keeping those details under wraps for now, Google makes it harder for hackers to figure out how to exploit the vulnerability before the new update closes the opportunity for attacks.

Chrome users need to relaunch the browser to activate the update. This will update Chrome to version 105.0.5195.102 for Windows, Mac, and Linux. To make sure you’re using the latest version, click the icon with the three dots in the top right corner of your browser. Navigating to “Help,” and then “About Google Chrome” will lead you to a page that tells you whether Chrome is up to date on your device.

This latest update comes just days after Google released Chrome version 105 on August 30th. That update already came with 24 security fixes. Apparently, that still wasn’t enough.

This is the sixth zero-day vulnerability Chrome has faced so far this year. The last vulnerability that was actively exploited was just flagged in mid-August, BleepingComputer reported.


Google is adding tools to make Meet video calls look better

There’s a good chance you’ve spent much of the last two-plus years sitting at home, cycling through endless days of virtual meetings staring into your laptop’s webcam and talking into your built-in mic. This means you’ve spent much of the last two-plus years appearing to everyone else like a mushy pile of poorly lit pixels, sounding like you’re shouting from inside a tin can. It’s not your fault: your laptop’s webcam just sucks. And so does its mic. But Google thinks it can fix them both with AI.

Google announced on Wednesday at its annual I/O developer conference that its Workspace team has been working on a couple of AI-powered ways to improve your virtual meetings. The most impressive is Portrait Restore, which Google says can automatically improve and sharpen your image even over a bad connection or through a bad camera. Portrait Lighting, similarly, gives you a set of AI-based controls over how you’re lit. You can’t move the window off to your left, Google seems to be saying, but you can make Google Meet look as if you had one to your right as well. And when it comes to sound, Google’s rolling out a de-reverberation tool meant to minimize the echoes that come from talking into your laptop from a boxy home office.

Portrait Lighting gives you AI-powered control of how you look on camera.
Image: Google

A lot of the underlying tech here comes from the AI and machine learning work Google has done with its Pixel phones. Those have substantially better hardware to work with than your average laptop webcam, but Prasad Setty, the company’s VP of digital work experience, said the principle is the same. “We want to make sure that the underlying software does the same thing, that we are able to use it across a wide range of hardware devices,” he said.

As hybrid and remote work have grown, the Google Workspace team has spent the last couple of years thinking about how to make work a little easier, Setty said. “We want technology to be an enabler,” Setty said in an interview. “We want it to be helpful, we want it to be intuitive, and we want it to solve real problems.” That led the Workspace team to think more about collaboration — hence the meeting tools — but also about how to make asynchronous work more palatable.

Automated transcriptions from Google Meet, and automatic summaries from Spaces
Image: Google

Google’s planning to roll out a new tool that generates automatic summaries of Spaces activity, so you can log on in the morning and catch up without having to read hundreds of messages. It’s also launching an automated transcription service for Meet meetings, with plans to eventually also summarize those.

“We want to be able to help people handle this information overload,” Setty said, and use AI to do so. He also said Google’s thinking a lot about “collaboration equity” and “representation equity,” trying to help keep everyone on an equal playing field no matter where they are, what tech they’re using, or how they’re working. One trick for Google, Setty acknowledged, is in helping people without getting too involved or making employees feel like they’re being watched by either Google or their employer. “The way we think about it,” he said, “is we want to empower users first and foremost. And then give them like the choice of like how they expose that information to their teams and so on.”

After all this time stuck at home, it’s nice to have a few tools to make your setup work a little better, especially ones that don’t require new apps or gear. But as people go back to the office, Google has an even bigger meeting challenge ahead: solving the problem of the hybrid meeting, with some people in a room and others on a screen. That’s going to take a lot more than good lighting and de-reverb.



Google is using a new way to measure skin tones to make search results more inclusive

Google is partnering with a Harvard professor to promote a new scale for measuring skin tones with the hope of fixing problems of bias and diversity in the company’s products.

The tech giant is working with Ellis Monk, an assistant professor of sociology at Harvard and the creator of the Monk Skin Tone Scale, or MST. The MST Scale is designed to replace outdated skin tone scales that are biased towards lighter skin. When these older scales are used by tech companies to categorize skin color, it can lead to products that perform worse for people with darker coloring, says Monk.

“Unless we have an adequate measure of differences in skin tone, we can’t really integrate that into products to make sure they’re more inclusive,” Monk tells The Verge. “The Monk Skin Tone Scale is a 10-point skin tone scale that was deliberately designed to be much more representative and inclusive of a wider range of different skin tones, especially for people [with] darker skin tones.”

There are numerous examples of tech products, particularly those that use AI, that perform worse with darker skin tones. These include apps designed to detect skin cancer, facial recognition software, and even machine vision systems used by self-driving cars.

Although there are lots of ways this sort of bias is programmed into these systems, one common factor is the use of outdated skin tone scales when collecting training data. The most popular skin tone scale is the Fitzpatrick scale, which is widely used in both academia and AI. This scale was originally designed in the ’70s to classify how people with paler skin burn or tan in the sun and was only later expanded to include darker skin.

This has led to some criticism that the Fitzpatrick scale fails to capture a full range of skin tones and may mean that when machine vision software is trained on Fitzpatrick data, it, too, is biased towards lighter skin types.

The 10-point Monk Skin Tone Scale.
Image: Ellis Monk / Google

The Fitzpatrick scale consists of six categories, but the MST Scale expands this to 10 different skin tones. Monk says this number was chosen based on his own research to balance diversity and ease of use. Some skin tone scales offer more than a hundred different categories, he says, but too much choice can lead to inconsistent results.

“Usually, if you got past 10 or 12 points on these types of scales [and] ask the same person to repeatedly pick out the same tones, the more you increase that scale, the less people are able to do that,” says Monk. “Cognitively speaking, it just becomes really hard to accurately and reliably differentiate.” A choice of 10 skin tones is much more manageable, he says.

Creating a new skin tone scale is only a first step, though, and the real challenge is integrating this work into real-world applications. In order to promote the MST Scale, Google has created a new website dedicated to explaining the research and best practices for its use in AI. The company says it’s also working to apply the MST Scale to a number of its own products. These include its “Real Tone” photo filters, which are designed to work better with darker skin tones, and its image search results.

Google will let users refine certain search results using skin tones selected from the MST Scale.
Image: Google

Google says it’s introducing a new feature to image search that will let users refine searches based on skin tones classified by the MST Scale. So, for example, if you search for “eye makeup” or “bridal makeup looks,” you can then filter results by skin tone. In the future, the company also plans to use the MST Scale to check the diversity of its results so that if you search for images of “cute babies” or “doctors,” you won’t be shown only white faces.

“One of the things we’re doing is taking a set of [image] results, understanding when those results are particularly homogenous across a few set of tones, and improving the diversity of the results,” Google’s head of product for responsible AI, Tulsee Doshi, told The Verge. Doshi stressed, though, that these updates were in a “very early” stage of development and hadn’t yet been rolled out across the company’s services.

This should strike a note of caution, not just for this specific change but also for Google’s approach to fixing problems of bias in its products more generally. The company has a patchy history when it comes to these issues, and the AI industry as a whole has a tendency to promise ethical guidelines and guardrails and then fail on the follow-through.

Take, for example, the infamous Google Photos error that led to its search algorithm tagging photos of Black people as “gorillas” and “chimpanzees.” This mistake was first noticed in 2015, yet Google confirmed to The Verge this week that it has still not fixed the problem but simply removed these search terms altogether. “While we’ve significantly improved our models based on feedback, they still aren’t perfect,” Google Photos spokesperson Michael Marconi told The Verge. “In order to prevent this type of mistake and potentially causing additional harm, the search terms remain disabled.”

Introducing these sorts of changes can also be culturally and politically challenging, reflecting broader difficulties in how we integrate this sort of tech into society. In the case of filtering image search results, for example, Doshi notes that “diversity” may look different in different countries, and if Google adjusts image results based on skin tone, it may have to change these results based on geography.

“What diversity means, for example, when we’re surfacing results in India [or] when we’re surfacing results in different parts of the world, is going to be inherently different,” says Doshi. “It’s hard to necessarily say, ‘oh, this is the exact set of good results we want,’ because that will differ per user, per region, per query.”

Introducing a new and more inclusive scale for measuring skin tones is a step forward, but much thornier issues involving AI and bias remain.


Repost: Original Source and Author Link


Google is beta testing its AI future with AI Test Kitchen

It’s clear that the future of Google is tied to AI language models. At this year’s I/O conference, the company announced a raft of updates that rely on this technology, from new “multisearch” features that let you pair image searches with text queries to improvements for Google Assistant and support for 24 new languages in Google Translate.

But Google — and the field of AI language research in general — faces major problems. Google itself has seriously mishandled internal criticism, firing employees who raised issues with bias in language models and damaging its reputation with the AI community. And researchers continue to find issues with AI language models, from failings with gender and racial biases to the fact that these models have a tendency to simply make things up (an unnerving finding for anyone who wants to use AI to deliver reliable information).

Now, though, the company seems to be taking something of a step back — or rather a slower step forward. At I/O this year, there’s been a new focus on projects designed to test and remedy problems like AI bias, including a new way to measure skin tones that the company hopes will help with diversity in machine-vision models and a new app named AI Test Kitchen that will give select individuals access to the company’s latest language models in order to probe them for errors. Think of it as a beta test for Google’s future.

Step into the AI Test Kitchen

Over a video call ahead of I/O, Josh Woodward, senior director of product management at Google, is asking Google’s latest language model to imagine a marshmallow volcano.

“You’re at a marshmallow volcano!” says the AI. “It’s erupting marshmallows. You hear a giant rumble and feel the ground shake. The marshmallows are flying everywhere.”

Woodward is happy with this answer and prods the system again. “What does it smell like?” he asks. “It smells likes marshmallows, obviously,” the AI replies. “You can smell it all around you.” Woodward laughs: “Okay, so that one was very terse.” But at least it made sense.

Woodward is showing me AI Test Kitchen, an Android app that will give select users limited access to Google’s latest and greatest AI language model, LaMDA 2. The model itself is an update to the original LaMDA announced at last year’s I/O and has the same basic functionality: you talk to it, and it talks back. But Test Kitchen wraps the system in a new, accessible interface, which encourages users to give feedback about its performance.

As Woodward explains, the idea is to create an experimental space for Google’s latest AI models. “These language models are very exciting, but they’re also very incomplete,” he says. “And we want to come up with a way to gradually get something in the hands of people to both see hopefully how it’s useful but also give feedback and point out areas where it comes up short.”

Google wants to solicit feedback from users about LaMDA’s conversational skills.
Image: Google

The app has three modes: “Imagine It,” “Talk About It,” and “List It,” with each intended to test a different aspect of the system’s functionality. “Imagine It” asks users to name a real or imaginary place, which LaMDA will then describe (the test is whether LaMDA can match your description); “Talk About It” offers a conversational prompt (like “talk to a tennis ball about dog”) with the intention of testing whether the AI stays on topic; while “List It” asks users to name any task or topic, with the aim of seeing if LaMDA can break it down into useful bullet points (so, if you say “I want to plant a vegetable garden,” the response might include sub-topics like “What do you want to grow?” and “Water and care”).

AI Test Kitchen will be rolling out in the US in the coming months but won’t be on the Play Store for just anyone to download. Woodward says Google hasn’t fully decided how it will offer access but suggests it will be on an invitation-only basis, with the company reaching out to academics, researchers, and policymakers to see if they’re interested in trying it out.

As Woodward explains, Google wants to push the app out “in a way where people know what they’re signing up for when they use it, knowing that it will say inaccurate things. It will say things, you know, that are not representative of a finished product.”

This announcement and framing tell us a few different things: first, that AI language models are hugely complex systems and that testing them exhaustively to find all the possible error cases isn’t something a company like Google thinks it can do without outside help. Second, that Google is extremely conscious of how prone to failure these AI language models are, and it wants to manage expectations.

Another imaginary scenario from LaMDA 2 in the AI Test Kitchen app.
Image: Google

When organizations push new AI systems into the public sphere without proper vetting, the results can be disastrous. (Remember Tay, the Microsoft chatbot that Twitter taught to be racist? Or Ask Delphi, the AI ethics advisor that could be prompted to condone genocide?) Google’s new AI Test Kitchen app is an attempt to soften this process: to invite criticism of its AI systems but control the flow of this feedback.

Deborah Raji, an AI researcher who specializes in audits and evaluations of AI models, told The Verge that this approach will necessarily limit what third parties can learn about the system. “Because they are completely controlling what they are sharing, it’s only possible to get a skewed understanding of how the system works, since there is an over-reliance on the company to gatekeep what prompts are allowed and how the model is interacted with,” says Raji. By contrast, some companies like Facebook have been much more open with their research, releasing AI models in a way that allows far greater scrutiny.

Exactly how Google’s approach will work in the real world isn’t yet clear, but the company does at least expect that some things will go wrong.

“We’ve done a big red-teaming process [to test the weaknesses of the system] internally, but despite all that, we still think people will try and break it, and a percentage of them will succeed,” says Woodward. “This is a journey, but it’s an area of active research. There’s a lot of stuff to figure out. And what we’re saying is that we can’t figure it out by just testing it internally — we need to open it up.”

Hunting for the future of search

Once you see LaMDA in action, it’s hard not to imagine how technology like this will change Google in the future, particularly its biggest product: Search. Although Google stresses that AI Test Kitchen is just a research tool, its functionality connects very obviously with the company’s services. Keeping a conversation on-topic is vital for Google Assistant, for example, while the “List It” mode in Test Kitchen is near-identical to Google’s “Things to know” feature, which breaks down tasks and topics into bullet points in search.

Google itself fueled such speculation (perhaps inadvertently) in a research paper published last year. In the paper, four of the company’s engineers suggested that, instead of typing questions into a search box and showing users the results, future search engines would act more like intermediaries, using AI to analyze the content of the results and then lifting out the most useful information. Obviously, this approach comes with new problems stemming from the AI models themselves, from bias in results to the systems making up answers.

To some extent, Google has already started down this path, with tools like “featured snippets” and “knowledge panels” used to directly answer queries. But AI has the potential to accelerate this process. Last year, for example, the company showed off an experimental AI model that answered questions about Pluto from the perspective of the former planet itself, and this year, the slow trickle of AI-powered, conversational features continues.

Despite speculation about a sea change to search, Google is stressing that whatever changes happen will happen slowly. When I asked Zoubin Ghahramani, vice president of research at Google AI, how AI will transform Google Search, his answer was something of an anticlimax.

“I think it’s going to be gradual,” says Ghahramani. “That maybe sounds like a lame answer, but I think it just matches reality.” He acknowledges that already “there are things you can put into the Google box, and you’ll just get an answer back. And over time, you basically get more and more of those things.” But he is careful to also say that the search box “shouldn’t be the end, it should be just the beginning of the search journey for people.”

For now, Ghahramani says Google is focusing on a handful of key criteria to evaluate its AI products, namely quality, safety, and groundedness. “Quality” refers to how on-topic the response is; “safety” refers to the potential for the model to say harmful or toxic things; while “groundedness” is whether or not the system is making up information.

These are essentially unsolved problems, though, and until AI systems are more tractable, Ghahramani says Google will be cautious about applying this technology. He stresses that “there’s a big gap between what we can build as a research prototype [and] then what can actually be deployed as a product.”

It’s a differentiation that should be taken with some skepticism. Just last month, for example, Google’s latest AI-powered “assistive writing” feature rolled out to users who immediately found problems. But it’s clear that Google badly wants this technology to work and, for now, is dedicated to working out its problems — one test app at a time.




Google AI flagged parents’ accounts for potential abuse over nude photos of their sick kids

A concerned father says that after he used his Android smartphone to take photos of an infection on his toddler’s groin, Google flagged the images as child sexual abuse material (CSAM), according to a report from The New York Times. The company closed his accounts and filed a report with the National Center for Missing and Exploited Children (NCMEC), spurring a police investigation. The case highlights how complicated it is to tell the difference between potential abuse and an innocent photo once it becomes part of a user’s digital library, whether on their personal device or in cloud storage.

Concerns about the consequences of blurring the lines for what should be considered private were aired last year when Apple announced its Child Safety plan. As part of the plan, Apple would locally scan images on Apple devices before they’re uploaded to iCloud and then match the images with the NCMEC’s hashed database of known CSAM. If enough matches were found, a human moderator would then review the content and lock the user’s account if it contained CSAM.

The Electronic Frontier Foundation (EFF), a nonprofit digital rights group, slammed Apple’s plan, saying it could “open a backdoor to your private life” and that it represented “a decrease in privacy for all iCloud Photos users, not an improvement.”

Apple eventually put the stored-image scanning portion of the plan on hold, but with the launch of iOS 15.2, it proceeded with an optional feature for child accounts included in a family sharing plan. If parents opt in, then on a child’s account, the Messages app “analyzes image attachments and determines if a photo contains nudity, while maintaining the end-to-end encryption of the messages.” If it detects nudity, it blurs the image, displays a warning for the child, and presents them with resources intended to help with safety online.

The main incident highlighted by The New York Times took place in February 2021, when some doctor’s offices were still closed due to the COVID-19 pandemic. As noted by the Times, Mark (whose last name was not revealed) noticed swelling in his child’s genital region and, at the request of a nurse, sent images of the issue ahead of a video consultation. The doctor wound up prescribing antibiotics that cured the infection.

According to the NYT, Mark received a notification from Google just two days after taking the photos, stating that his accounts had been locked due to “harmful content” that was “a severe violation of Google’s policies and might be illegal.”

Like many internet companies, including Facebook, Twitter, and Reddit, Google has used hash matching with Microsoft’s PhotoDNA to scan uploaded images for matches with known CSAM. In 2012, this scanning led to the arrest of a registered sex offender who had used Gmail to send images of a young girl.

In 2018, Google announced the launch of its Content Safety API AI toolkit, which can “proactively identify never-before-seen CSAM imagery so it can be reviewed and, if confirmed as CSAM, removed and reported as quickly as possible.” Google uses the tool for its own services and, along with CSAI Match, a hash-matching solution for video developed by YouTube engineers, offers it for use by others as well.

From Google’s “Fighting abuse on our own platforms and services” page:

We identify and report CSAM with trained specialist teams and cutting-edge technology, including machine learning classifiers and hash-matching technology, which creates a “hash”, or unique digital fingerprint, for an image or a video so it can be compared with hashes of known CSAM. When we find CSAM, we report it to the National Center for Missing and Exploited Children (NCMEC), which liaises with law enforcement agencies around the world.
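The lookup flow Google describes can be illustrated with a minimal sketch. To be clear, this is not PhotoDNA or Google’s actual pipeline: PhotoDNA uses a proprietary perceptual hash designed to survive resizing and re-encoding, while this stand-in uses a cryptographic hash, which only matches byte-identical files. The `fingerprint` and `scan` helpers and the example inputs are hypothetical, purely to show the compare-against-a-blocklist pattern:

```python
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    # Real systems like PhotoDNA compute a perceptual hash that tolerates
    # resizing and re-encoding; SHA-256 here only matches identical bytes,
    # but the database-lookup step is structurally the same.
    return hashlib.sha256(image_bytes).hexdigest()

def scan(image_bytes: bytes, known_hashes: set[str]) -> bool:
    """Return True if the image's fingerprint appears in the blocklist."""
    return fingerprint(image_bytes) in known_hashes

# Hypothetical blocklist seeded with one known fingerprint.
blocklist = {fingerprint(b"known-bad-example")}
print(scan(b"known-bad-example", blocklist))  # True
print(scan(b"innocent-photo", blocklist))     # False
```

In deployed systems, the blocklist hashes come from databases of previously identified material (such as NCMEC’s), and a match triggers human review rather than automatic enforcement.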

A Google spokesperson told the Times that Google only scans users’ personal images when a user takes “affirmative action,” which can apparently include backing their pictures up to Google Photos. When Google flags exploitative images, the Times notes that Google is required by federal law to report the potential offender to the CyberTipLine at the NCMEC. In 2021, Google reported 621,583 cases of CSAM to the NCMEC’s CyberTipLine, while the NCMEC alerted authorities to 4,260 potential victims, a list that the NYT says includes Mark’s son.

Mark ended up losing access to his emails, contacts, photos, and even his phone number, since he used Google Fi’s mobile service, the Times reports. He immediately appealed the decision, but Google denied the request. The San Francisco Police Department, in whose jurisdiction Mark lives, opened an investigation into him in December 2021 and obtained all the information he had stored with Google. The investigator on the case ultimately found that the incident “did not meet the elements of a crime and that no crime occurred,” the NYT notes.

“Child sexual abuse material (CSAM) is abhorrent and we’re committed to preventing the spread of it on our platforms,” Google spokesperson Christa Muldoon said in an emailed statement to The Verge. “We follow US law in defining what constitutes CSAM and use a combination of hash matching technology and artificial intelligence to identify it and remove it from our platforms. Additionally, our team of child safety experts reviews flagged content for accuracy and consults with pediatricians to help ensure we’re able to identify instances where users may be seeking medical advice.”

While protecting children from abuse is undeniably important, critics argue that the practice of scanning a user’s photos unreasonably encroaches on their privacy. Jon Callas, a director of technology projects at the EFF, called Google’s practices “intrusive” in a statement to the NYT. “This is precisely the nightmare that we are all concerned about,” Callas told the NYT. “They’re going to scan my family album, and then I’m going to get into trouble.”
